Test Report: Docker_Windows 14420

7d3b93abdd89ce8ebba3c81494e660414100c7c4:2022-06-29:24669

Failed tests (11/270)

TestFunctional/parallel/ServiceCmd (1987.77s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220629181245-2408 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220629181245-2408 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54c4b5c49f-7pm4f" [5a35bec7-0a31-421a-98b3-2ac8fb3946dc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54c4b5c49f-7pm4f" [5a35bec7-0a31-421a-98b3-2ac8fb3946dc] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 9.1030223s
functional_test.go:1448: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 service list: (7.2822054s)
functional_test.go:1462: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 service --namespace=default --https --url hello-node
functional_test.go:1391: Failed to send interrupt to proc: not supported by windows

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 service --namespace=default --https --url hello-node: exit status 1 (32m26.5469225s)

-- stdout --
	https://127.0.0.1:53406

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1464: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-20220629181245-2408 service --namespace=default --https --url hello-node" : exit status 1
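Context for this failure mode: with the Docker driver, `minikube service --url` keeps a tunnel process alive and only prints the URL while it is running (the stdout above did contain `https://127.0.0.1:53406`), so a caller has to read the URL from the live stdout stream rather than wait for the process to exit; and per the log line above, sending an interrupt to the child is not supported on Windows. A minimal, hypothetical sketch of that read-then-terminate pattern, using a stand-in command instead of minikube:

```python
import subprocess
import sys

def first_url_line(cmd):
    """Start a long-running command and return its first stdout line
    without waiting for the process to exit (a tunnel process may
    never exit on its own)."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    try:
        line = proc.stdout.readline().strip()
    finally:
        # Sending SIGINT to a child is not supported on Windows,
        # so terminate() is used as the portable fallback here.
        proc.terminate()
        proc.wait()
    return line

# Stand-in for the tunnel command: prints a URL, then blocks.
url = first_url_line([sys.executable, "-c",
                      "import time; print('https://127.0.0.1:53406', flush=True); time.sleep(60)"])
print(url)  # → https://127.0.0.1:53406
```

This mirrors what the test harness attempts; the actual test additionally has a deadline, which is how it ends up reporting exit status 1 after 32 minutes.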
functional_test.go:1401: service test failed - dumping debug information
functional_test.go:1402: -----------------------service failure post-mortem--------------------------------
functional_test.go:1405: (dbg) Run:  kubectl --context functional-20220629181245-2408 describe po hello-node
functional_test.go:1409: hello-node pod describe:
Name:         hello-node-54c4b5c49f-7pm4f
Namespace:    default
Priority:     0
Node:         functional-20220629181245-2408/192.168.49.2
Start Time:   Wed, 29 Jun 2022 18:20:02 +0000
Labels:       app=hello-node
pod-template-hash=54c4b5c49f
Annotations:  <none>
Status:       Running
IP:           172.17.0.6
IPs:
IP:           172.17.0.6
Controlled By:  ReplicaSet/hello-node-54c4b5c49f
Containers:
echoserver:
Container ID:   docker://a38ab82ec10c4e95af4b7651e032bffd9a4dadc58e74e0c5200616d6476c99af
Image:          k8s.gcr.io/echoserver:1.8
Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port:           <none>
Host Port:      <none>
State:          Running
Started:      Wed, 29 Jun 2022 18:20:04 +0000
Ready:          True
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4ncwx (ro)
Conditions:
Type              Status
Initialized       True 
Ready             True 
ContainersReady   True 
PodScheduled      True 
Volumes:
kube-api-access-4ncwx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type    Reason     Age        From                                     Message
----    ------     ----       ----                                     -------
Normal  Scheduled  <unknown>                                           Successfully assigned default/hello-node-54c4b5c49f-7pm4f to functional-20220629181245-2408
Normal  Pulled     32m        kubelet, functional-20220629181245-2408  Container image "k8s.gcr.io/echoserver:1.8" already present on machine
Normal  Created    32m        kubelet, functional-20220629181245-2408  Created container echoserver
Normal  Started    32m        kubelet, functional-20220629181245-2408  Started container echoserver

Name:         hello-node-connect-578cdc45cb-m2pgx
Namespace:    default
Priority:     0
Node:         functional-20220629181245-2408/192.168.49.2
Start Time:   Wed, 29 Jun 2022 18:19:27 +0000
Labels:       app=hello-node-connect
pod-template-hash=578cdc45cb
Annotations:  <none>
Status:       Running
IP:           172.17.0.5
IPs:
IP:           172.17.0.5
Controlled By:  ReplicaSet/hello-node-connect-578cdc45cb
Containers:
echoserver:
Container ID:   docker://9c2d58554a4c20ee271f9f165e72a7614362ca65826ee89ce8333a2d8423e82b
Image:          k8s.gcr.io/echoserver:1.8
Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port:           <none>
Host Port:      <none>
State:          Running
Started:      Wed, 29 Jun 2022 18:19:54 +0000
Ready:          True
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w544q (ro)
Conditions:
Type              Status
Initialized       True 
Ready             True 
ContainersReady   True 
PodScheduled      True 
Volumes:
kube-api-access-w544q:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type    Reason     Age        From                                     Message
----    ------     ----       ----                                     -------
Normal  Scheduled  <unknown>                                           Successfully assigned default/hello-node-connect-578cdc45cb-m2pgx to functional-20220629181245-2408
Normal  Pulling    33m        kubelet, functional-20220629181245-2408  Pulling image "k8s.gcr.io/echoserver:1.8"
Normal  Pulled     32m        kubelet, functional-20220629181245-2408  Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 20.9877549s
Normal  Created    32m        kubelet, functional-20220629181245-2408  Created container echoserver
Normal  Started    32m        kubelet, functional-20220629181245-2408  Started container echoserver

functional_test.go:1411: (dbg) Run:  kubectl --context functional-20220629181245-2408 logs -l app=hello-node
functional_test.go:1415: hello-node logs:
functional_test.go:1417: (dbg) Run:  kubectl --context functional-20220629181245-2408 describe svc hello-node
functional_test.go:1421: hello-node svc describe:
Name:                     hello-node
Namespace:                default
Labels:                   app=hello-node
Annotations:              <none>
Selector:                 app=hello-node
Type:                     NodePort
IP:                       10.107.130.255
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30735/TCP
Endpoints:                172.17.0.6:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220629181245-2408
helpers_test.go:231: (dbg) Done: docker inspect functional-20220629181245-2408: (1.1016731s)
helpers_test.go:235: (dbg) docker inspect functional-20220629181245-2408:

-- stdout --
	[
	    {
	        "Id": "212bfdec5401cb249c3f201396cf8bb34e9f1ebe818d5bd6c85bf639a09ed2aa",
	        "Created": "2022-06-29T18:13:37.710266Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26189,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:13:38.7099878Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/212bfdec5401cb249c3f201396cf8bb34e9f1ebe818d5bd6c85bf639a09ed2aa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/212bfdec5401cb249c3f201396cf8bb34e9f1ebe818d5bd6c85bf639a09ed2aa/hostname",
	        "HostsPath": "/var/lib/docker/containers/212bfdec5401cb249c3f201396cf8bb34e9f1ebe818d5bd6c85bf639a09ed2aa/hosts",
	        "LogPath": "/var/lib/docker/containers/212bfdec5401cb249c3f201396cf8bb34e9f1ebe818d5bd6c85bf639a09ed2aa/212bfdec5401cb249c3f201396cf8bb34e9f1ebe818d5bd6c85bf639a09ed2aa-json.log",
	        "Name": "/functional-20220629181245-2408",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220629181245-2408:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220629181245-2408",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ed66b2c038ad7d217ff416edf7875e311de0b4e899660b5392b47906501bf6e3-init/diff:/var/lib/docker/overlay2/18fa2cfa420a1e80c1aefe4442db7e09e685eb6b69d30a3f812abf0fcd5b1ce8/diff:/var/lib/docker/overlay2/aba3e90b0a8f2d7eefad9e62fda91f6713fdc48732352c033f55a5f2fb9d5051/diff:/var/lib/docker/overlay2/6e33975e7a2b5eb470d2cc19f49dfd8506b5158029ca29518653c9de49149fa9/diff:/var/lib/docker/overlay2/0037ae946c15e22839a7ac209758f6fe7b71b326b867a9ce04ec676f5b8c06a6/diff:/var/lib/docker/overlay2/9c42624bebccf9152803eaad763e31ce035bdcdde0f54bfd5c88b9bb436d8327/diff:/var/lib/docker/overlay2/84521428dc63a36c9c8e902e4a72309e30edf7ca74fce9fc847a1f0322dbc53f/diff:/var/lib/docker/overlay2/7d7f88709e16b5aae440b1e298e370c888250af23e45a901effd41cf24361c60/diff:/var/lib/docker/overlay2/ec651b0921231e96280abd101a8af5a63c74f75e5393917c7c51a4779e8c18ee/diff:/var/lib/docker/overlay2/de54eba4af17491eb746d5d519e3e9d2209bb7e77a7e5e97a3fed0e5222cc91b/diff:/var/lib/docker/overlay2/41f2a6
c56ef2a3c6f7de181184e718ecb06cff24d2f3067f95f7609c8428890c/diff:/var/lib/docker/overlay2/dadd972d4b0ae7e16296c1fd2116b2362dcd68c94ca80683b16746f9f9af4c04/diff:/var/lib/docker/overlay2/d07ed1db13541e2b4edaad932df907a36057e8115f039c12379e1f4bd9358fcc/diff:/var/lib/docker/overlay2/2ea8ed9010b183040dd8663549244a49bf69bebdb52d48dcdfab8bb80ae569e9/diff:/var/lib/docker/overlay2/2b3aef18028ba313056c34b21dd2fe925b2a075b71ce79d4a700666a4a1294f3/diff:/var/lib/docker/overlay2/4f2c4fada74eb6f2253a2e6e3e69366c21a0e146314e507111b068a94431e118/diff:/var/lib/docker/overlay2/eebb16c3252fcc56a8f29f6f4cc140749f09d91c7618992ef26ccd17bc7326a8/diff:/var/lib/docker/overlay2/4c9fb9630f6a81f45d6683e4b35bd45b802de81702df3682376cd5eded2c6293/diff:/var/lib/docker/overlay2/7966fc785bbb93b70572461c2b75d02d408e500cbfbe9fb28a85610069e53048/diff:/var/lib/docker/overlay2/7ccd830d8272e56eb8af3cb67fd85111a15d0bd24740b16d9820d03e8b5e613b/diff:/var/lib/docker/overlay2/98991bbd08d46d706f89f20373025f42bb1eca28599ab9d368327d28d37da3e5/diff:/var/lib/d
ocker/overlay2/acd0db325a9cc956c00473cca2cbe9e8938e54e309f7812ce96651505f2c026c/diff:/var/lib/docker/overlay2/a952097255f1545a148e11dc183ed9d457d086b3b19e6cf5c0a84d334a7868fa/diff:/var/lib/docker/overlay2/3626d83dcea23ee4d5fc8d381d865b85ce85b9ec935c11ea07472acad97752b1/diff:/var/lib/docker/overlay2/daee0769e25b6c6df3644bb280d7cdd0552baeddf00f478d726753a2f02990c4/diff:/var/lib/docker/overlay2/5884da0a2a1c8a365253922fefbfd2861b15e6174a57089f167640bf48fab86d/diff:/var/lib/docker/overlay2/aee5fb879dea6a5dd5d33604a38bc85cef9e6fac8b91d40eae0556920c1f013e/diff:/var/lib/docker/overlay2/329fa8dc36977e4ddcb0c9d5de68a736a48555759a67cc6b901c51a7d20bc940/diff:/var/lib/docker/overlay2/a83adaf465d339d1dbbb19a8e721a3af2dde845d387565c2c23aa55a2a9b3050/diff:/var/lib/docker/overlay2/1eb7c3b1832b132fab8951130f1f3e7525eb849dfd649d730922f192509da8c7/diff:/var/lib/docker/overlay2/63900a7721a42c32f20ce20e83aa0648dc9f1f96e2c44c60b949cdf2ed635b89/diff:/var/lib/docker/overlay2/b1f2b06276b16264d5e1dd74a450a8433b30f118bdbda62a0be9806dc63
962c9/diff:/var/lib/docker/overlay2/1d38b89baa7faea58d17550171f82f91e22823fa4687739f8e96012ba2d6b8bc/diff:/var/lib/docker/overlay2/6a845f21cdc42782d41ea29b6b6d28d87b17e628822711a35d5986ee5327afe9/diff:/var/lib/docker/overlay2/e64e9638ae14983a1fadb7196cfb18b26908f27c9c025d014d1b3e014fe592f2/diff:/var/lib/docker/overlay2/0e4706537848c7cd84366bdfaa32b5a3c84c900772b5ad83d1dfd507ddbfe686/diff:/var/lib/docker/overlay2/2bfed1b7470b0df7e7cb5905c1d1671735c6755b541e4f951e007994f0a090d8/diff:/var/lib/docker/overlay2/a337d8b9854e844eac3af70f23116c0353d23cc66cd2075e17b2f5c4daeb3a54/diff:/var/lib/docker/overlay2/ad5d4a9033e102f17440a355bb241c3e635b435132d5d3b83f45c913c2b142b9/diff:/var/lib/docker/overlay2/533adabc16e60531543e7123b3eb7c5db5a5d5c8b3ff5c5a58a357f9ce9b92a4/diff:/var/lib/docker/overlay2/004d9874f6692e521883f825fdecfdbc36c21b5776c23879841b4718d1b9f2ab/diff:/var/lib/docker/overlay2/aada4d8989429d7ecea61233a031b78c69ebd53862529973a4c5a3f581e5b2dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed66b2c038ad7d217ff416edf7875e311de0b4e899660b5392b47906501bf6e3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed66b2c038ad7d217ff416edf7875e311de0b4e899660b5392b47906501bf6e3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed66b2c038ad7d217ff416edf7875e311de0b4e899660b5392b47906501bf6e3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220629181245-2408",
	                "Source": "/var/lib/docker/volumes/functional-20220629181245-2408/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220629181245-2408",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220629181245-2408",
	                "name.minikube.sigs.k8s.io": "functional-20220629181245-2408",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a270067982d790ed610e5bd163d8862e3483ce4b2983a8f340ab49ff87ede8b7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53084"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53085"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53086"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53087"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53088"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a270067982d7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220629181245-2408": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "212bfdec5401",
	                        "functional-20220629181245-2408"
	                    ],
	                    "NetworkID": "da3406170d3a0abd5dd8a6d823daa27e6ae73eddab3c356fd71e6fdc35be0102",
	                    "EndpointID": "556c4ed84bdd5c0f8652aefd3ff33363fd7c859c70f4777622e9e4d70d8d1bd9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
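The `NetworkSettings.Ports` section of the inspect output above (host ports 53084-53088 bound to 127.0.0.1) is how the host reaches the node container. A small illustrative sketch of extracting that mapping from `docker inspect` JSON, trimmed to the fields shown above:

```python
import json

# Trimmed docker inspect output, matching the structure dumped above.
inspect_json = """
[
  {
    "Name": "/functional-20220629181245-2408",
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "53084"}],
        "8441/tcp": [{"HostIp": "127.0.0.1", "HostPort": "53088"}]
      }
    }
  }
]
"""

def host_ports(inspect_output):
    """Map container port -> (host IP, host port) from docker inspect JSON."""
    container = json.loads(inspect_output)[0]
    ports = container["NetworkSettings"]["Ports"]
    return {cport: (bindings[0]["HostIp"], bindings[0]["HostPort"])
            for cport, bindings in ports.items() if bindings}

print(host_ports(inspect_json))
```

Note that 8441/tcp (the apiserver port) maps to 53088 here, while the failed `service --url` command printed 53406: the tunnel allocates a fresh ephemeral host port per invocation rather than reusing the container's static bindings.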
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220629181245-2408 -n functional-20220629181245-2408
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220629181245-2408 -n functional-20220629181245-2408: (6.7446379s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 logs -n 25: (8.0991036s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|-----------------------------------------------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	|    Command     |                                                Args                                                 | Profile  |       User        | Version |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	| start          | -p                                                                                                  | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:20 GMT |                     |
	|                | functional-20220629181245-2408                                                                      |          |                   |         |                     |                     |
	|                | --dry-run --memory                                                                                  |          |                   |         |                     |                     |
	|                | 250MB --alsologtostderr                                                                             |          |                   |         |                     |                     |
	|                | --driver=docker                                                                                     |          |                   |         |                     |                     |
	| dashboard      | --url --port 36195 -p                                                                               | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:20 GMT |                     |
	|                | functional-20220629181245-2408                                                                      |          |                   |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                              |          |                   |         |                     |                     |
	| start          | -p                                                                                                  | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:20 GMT |                     |
	|                | functional-20220629181245-2408                                                                      |          |                   |         |                     |                     |
	|                | --dry-run --alsologtostderr                                                                         |          |                   |         |                     |                     |
	|                | -v=1 --driver=docker                                                                                |          |                   |         |                     |                     |
	| ssh            | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT | 29 Jun 22 18:21 GMT |
	|                | ssh sudo cat                                                                                        |          |                   |         |                     |                     |
	|                | /etc/test/nested/copy/2408/hosts                                                                    |          |                   |         |                     |                     |
	| ssh            | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT | 29 Jun 22 18:21 GMT |
	|                | ssh sudo cat                                                                                        |          |                   |         |                     |                     |
	|                | /etc/ssl/certs/2408.pem                                                                             |          |                   |         |                     |                     |
	| ssh            | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT | 29 Jun 22 18:21 GMT |
	|                | ssh sudo cat                                                                                        |          |                   |         |                     |                     |
	|                | /usr/share/ca-certificates/2408.pem                                                                 |          |                   |         |                     |                     |
	| ssh            | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT | 29 Jun 22 18:21 GMT |
	|                | ssh sudo cat                                                                                        |          |                   |         |                     |                     |
	|                | /etc/ssl/certs/51391683.0                                                                           |          |                   |         |                     |                     |
	| ssh            | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT | 29 Jun 22 18:21 GMT |
	|                | ssh sudo cat                                                                                        |          |                   |         |                     |                     |
	|                | /etc/ssl/certs/24082.pem                                                                            |          |                   |         |                     |                     |
	| ssh            | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT | 29 Jun 22 18:21 GMT |
	|                | ssh sudo cat                                                                                        |          |                   |         |                     |                     |
	|                | /usr/share/ca-certificates/24082.pem                                                                |          |                   |         |                     |                     |
	| ssh            | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT | 29 Jun 22 18:21 GMT |
	|                | ssh sudo cat                                                                                        |          |                   |         |                     |                     |
	|                | /etc/ssl/certs/3ec20f2e.0                                                                           |          |                   |         |                     |                     |
	| ssh            | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:21 GMT |                     |
	|                | ssh sudo systemctl is-active                                                                        |          |                   |         |                     |                     |
	|                | crio                                                                                                |          |                   |         |                     |                     |
	| cp             | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT | 29 Jun 22 18:22 GMT |
	|                | cp testdata\cp-test.txt                                                                             |          |                   |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                                            |          |                   |         |                     |                     |
	| ssh            | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT | 29 Jun 22 18:22 GMT |
	|                | ssh -n                                                                                              |          |                   |         |                     |                     |
	|                | functional-20220629181245-2408                                                                      |          |                   |         |                     |                     |
	|                | sudo cat                                                                                            |          |                   |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                                            |          |                   |         |                     |                     |
	| cp             | functional-20220629181245-2408 cp functional-20220629181245-2408:/home/docker/cp-test.txt           | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT | 29 Jun 22 18:22 GMT |
	|                | C:\Users\jenkins.minikube8\AppData\Local\Temp\TestFunctionalparallelCpCmd3231594891\001\cp-test.txt |          |                   |         |                     |                     |
	| ssh            | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT | 29 Jun 22 18:22 GMT |
	|                | ssh -n                                                                                              |          |                   |         |                     |                     |
	|                | functional-20220629181245-2408                                                                      |          |                   |         |                     |                     |
	|                | sudo cat                                                                                            |          |                   |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                                            |          |                   |         |                     |                     |
	| image          | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT | 29 Jun 22 18:22 GMT |
	|                | image ls --format short                                                                             |          |                   |         |                     |                     |
	| image          | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT | 29 Jun 22 18:22 GMT |
	|                | image ls --format yaml                                                                              |          |                   |         |                     |                     |
	| ssh            | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT |                     |
	|                | ssh pgrep buildkitd                                                                                 |          |                   |         |                     |                     |
	| image          | functional-20220629181245-2408 image build -t                                                       | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:22 GMT | 29 Jun 22 18:23 GMT |
	|                | localhost/my-image:functional-20220629181245-2408                                                   |          |                   |         |                     |                     |
	|                | testdata\build                                                                                      |          |                   |         |                     |                     |
	| image          | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:23 GMT | 29 Jun 22 18:23 GMT |
	|                | image ls                                                                                            |          |                   |         |                     |                     |
	| image          | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:23 GMT | 29 Jun 22 18:23 GMT |
	|                | image ls --format json                                                                              |          |                   |         |                     |                     |
	| image          | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:23 GMT | 29 Jun 22 18:23 GMT |
	|                | image ls --format table                                                                             |          |                   |         |                     |                     |
	| update-context | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:23 GMT | 29 Jun 22 18:23 GMT |
	|                | update-context                                                                                      |          |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                              |          |                   |         |                     |                     |
	| update-context | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:23 GMT | 29 Jun 22 18:23 GMT |
	|                | update-context                                                                                      |          |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                              |          |                   |         |                     |                     |
	| update-context | functional-20220629181245-2408                                                                      | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 18:23 GMT | 29 Jun 22 18:23 GMT |
	|                | update-context                                                                                      |          |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                              |          |                   |         |                     |                     |
	|----------------|-----------------------------------------------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 18:20:56
	Running on machine: minikube8
	Binary: Built with gc go1.18.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 18:20:56.631224    2708 out.go:296] Setting OutFile to fd 756 ...
	I0629 18:20:56.699895    2708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:20:56.699895    2708 out.go:309] Setting ErrFile to fd 684...
	I0629 18:20:56.699895    2708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:20:56.723887    2708 out.go:303] Setting JSON to false
	I0629 18:20:56.726519    2708 start.go:115] hostinfo: {"hostname":"minikube8","uptime":19419,"bootTime":1656507437,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0629 18:20:56.726519    2708 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 18:20:56.731366    2708 out.go:177] * [functional-20220629181245-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0629 18:20:56.740898    2708 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 18:20:56.743506    2708 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0629 18:20:56.747540    2708 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 18:20:56.749869    2708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 18:20:56.753197    2708 config.go:178] Loaded profile config "functional-20220629181245-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 18:20:56.755522    2708 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 18:20:59.491006    2708 docker.go:137] docker version: linux-20.10.16
	I0629 18:20:59.499120    2708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 18:21:01.593392    2708 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0941095s)
	I0629 18:21:01.594134    2708 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:52 SystemTime:2022-06-29 18:21:00.5383184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 18:21:01.602783    2708 out.go:177] * Using the docker driver based on existing profile
	I0629 18:21:01.604768    2708 start.go:284] selected driver: docker
	I0629 18:21:01.604972    2708 start.go:808] validating driver "docker" against &{Name:functional-20220629181245-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220629181245-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 18:21:01.604972    2708 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 18:21:01.619960    2708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 18:21:03.730073    2708 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1101s)
	I0629 18:21:03.730073    2708 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-06-29 18:21:02.6700909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 18:21:03.782833    2708 cni.go:95] Creating CNI manager for ""
	I0629 18:21:03.782833    2708 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 18:21:03.782833    2708 start_flags.go:310] config:
	{Name:functional-20220629181245-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220629181245-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 18:21:03.793917    2708 out.go:177] * dry-run validation complete!
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 18:13:39 UTC, end at Wed 2022-06-29 18:53:01 UTC. --
	Jun 29 18:17:48 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:48.945766100Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 1a68ab8e986055d9ab693ac55d5c7207f0b00da3209804278e2b708257684c72 01c1830a95dbe3e2a78e968510049b9989b2ace82a6ced9151741389f604930f], retrying...."
	Jun 29 18:17:49 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:49.034518600Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 29 18:17:49 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:49.152076800Z" level=info msg="Loading containers: done."
	Jun 29 18:17:49 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:49.219808200Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jun 29 18:17:49 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:49.219994600Z" level=info msg="Daemon has completed initialization"
	Jun 29 18:17:49 functional-20220629181245-2408 systemd[1]: Started Docker Application Container Engine.
	Jun 29 18:17:49 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:49.279743100Z" level=info msg="API listen on [::]:2376"
	Jun 29 18:17:49 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:49.288555900Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 29 18:17:53 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:53.971619800Z" level=info msg="ignoring event" container=8f862c41e32a1e21645ee4c2db1d64a064d9a03762b23c242f679af9e93b40cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.574576300Z" level=info msg="ignoring event" container=5b9368e46d90e8fb14de6b75aedb4d217ad8cf517fa4b5ef651d6f31a0e8e2d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.773474800Z" level=info msg="ignoring event" container=6c9072f796a5a708f84743a6314b3fcb40d0afa747f5e5d6fbb3b9a78d3f79bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.774108100Z" level=info msg="ignoring event" container=289f59e187075b0c810ef72d68ee48eb24e87feda6a4a6edb13d3d16af6649c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.774632200Z" level=info msg="ignoring event" container=3b70cc5d916be588602612cdff754fc3e4042e3c5bc3968261ac6dde3c5b7b0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.774944900Z" level=info msg="ignoring event" container=b08c43c862782125e7d28ebf26c863e3abb878fd29832eef710d0b771b93f1bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.777080200Z" level=info msg="ignoring event" container=a28b12885d0afb1e744df3ece228ce9b49a309a0b8db289f0061d443806d54e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.872019200Z" level=info msg="ignoring event" container=ad13f7d80fedf872cf420e2d6a618e3a29b7e66611eafb84a9c32bfd9bab5acd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.872120900Z" level=info msg="ignoring event" container=6091ef84ba649b00e6db52f5d1b8952b75a3c0b3992bf8adb80895973cff8cac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:17:58 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.978529700Z" level=info msg="ignoring event" container=2a1307e43ecd4e53245717b4da9456c40b8aedca6c604a9a0e20cb5c45847262 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:17:59 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:17:58.984580300Z" level=info msg="ignoring event" container=8a221bc76ab3fce407a351706ce1ac0b37f825a9888f573170ccd58e112a1464 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:18:03 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:18:03.598832300Z" level=info msg="ignoring event" container=fa1705cd6426848b08c61ffb695cc71b61cabb2c4713f1a00a3c43c110412250 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:18:16 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:18:16.803467300Z" level=info msg="ignoring event" container=aa917099ec7c6a97dd3e3e8aacfdba3ef4364a3956197909fdf96d9fc653533b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:19:47 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:19:47.493092900Z" level=info msg="ignoring event" container=8eb2d8391ccfaea6ca7c6f63d9fec05f02003fa857df481a8c4f055337d64693 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:19:48 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:19:48.371800000Z" level=info msg="ignoring event" container=49c0a39c294942be8e71067ced35dd5b512aba2a554db28270ed23817fd73bb3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:23:02 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:23:02.872319200Z" level=info msg="ignoring event" container=ff29731e5d65b82348c0b0d5cb902a5d72cf8c69b8e51a1ad887c2d9c0cb1202 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:23:03 functional-20220629181245-2408 dockerd[8808]: time="2022-06-29T18:23:03.560444400Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	598bfc8199ffb       mysql@sha256:8b4b41d530c40d77a3205c53f7ecf1026d735648d9a09777845f305953e5eff5                   30 minutes ago      Running             mysql                     0                   9fbc6aee81bd1
	a38ab82ec10c4       82e4c8a736a4f                                                                                   32 minutes ago      Running             echoserver                0                   af2618c2f2906
	9b0480c6c94e7       nginx@sha256:10f14ffa93f8dedf1057897b745e5ac72ac5655c299dade0aa434c71557697ea                   33 minutes ago      Running             myfrontend                0                   19704fb3610eb
	9c2d58554a4c2       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   33 minutes ago      Running             echoserver                0                   f3790d8f70348
	254d7e77ab7b5       nginx@sha256:8e38930f0390cbd79b2d1528405fb17edcda5f4a30875ecf338ebaa598dc994e                   33 minutes ago      Running             nginx                     0                   7b769a4d97e84
	22c1c4f4f76fa       6e38f40d628db                                                                                   34 minutes ago      Running             storage-provisioner       4                   635aa34b11d4a
	249f22d600739       a634548d10b03                                                                                   34 minutes ago      Running             kube-proxy                4                   a9f396c4a971d
	d562c25fbc4e8       a4ca41631cc7a                                                                                   34 minutes ago      Running             coredns                   3                   6b2242aa8182f
	3df956d56f8ad       d3377ffb7177c                                                                                   34 minutes ago      Running             kube-apiserver            0                   bf61a429495f6
	5b56e5f429943       aebe758cef4cd                                                                                   34 minutes ago      Running             etcd                      4                   e8beefefd6c28
	b45d5ee5a7f3e       34cdf99b1bb3b                                                                                   34 minutes ago      Running             kube-controller-manager   3                   b5d261e3b379d
	d24ee2d8aa4bc       5d725196c1f47                                                                                   34 minutes ago      Running             kube-scheduler            4                   3a8892b491e09
	8a221bc76ab3f       aebe758cef4cd                                                                                   35 minutes ago      Exited              etcd                      3                   5b9368e46d90e
	8f862c41e32a1       6e38f40d628db                                                                                   35 minutes ago      Exited              storage-provisioner       3                   b08c43c862782
	ad13f7d80fedf       a634548d10b03                                                                                   35 minutes ago      Exited              kube-proxy                3                   6c9072f796a5a
	2a1307e43ecd4       34cdf99b1bb3b                                                                                   35 minutes ago      Exited              kube-controller-manager   2                   a28b12885d0af
	fa1705cd64268       a4ca41631cc7a                                                                                   35 minutes ago      Exited              coredns                   2                   6091ef84ba649
	ede947b24c2c7       5d725196c1f47                                                                                   35 minutes ago      Exited              kube-scheduler            3                   7d40673c18eb7
	
	* 
	* ==> coredns [d562c25fbc4e] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> coredns [fa1705cd6426] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220629181245-2408
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220629181245-2408
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed
	                    minikube.k8s.io/name=functional-20220629181245-2408
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_29T18_14_33_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Jun 2022 18:14:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220629181245-2408
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Jun 2022 18:52:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Jun 2022 18:48:49 +0000   Wed, 29 Jun 2022 18:14:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Jun 2022 18:48:49 +0000   Wed, 29 Jun 2022 18:14:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Jun 2022 18:48:49 +0000   Wed, 29 Jun 2022 18:14:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Jun 2022 18:48:49 +0000   Wed, 29 Jun 2022 18:14:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220629181245-2408
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                bbe1e1cef6e940328962dca52b3c5731
	  Boot ID:                    3343ff08-5090-4fcc-990d-809e76a24666
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-54c4b5c49f-7pm4f                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         32m
	  default                     hello-node-connect-578cdc45cb-m2pgx                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
	  default                     mysql-67f7d69d8b-b2279                                    600m (3%)     700m (4%)   512Mi (0%)       700Mi (1%)     31m
	  default                     nginx-svc                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m
	  default                     sp-pod                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
	  kube-system                 coredns-6d4b75cb6d-8wtrf                                  100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     38m
	  kube-system                 etcd-functional-20220629181245-2408                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         38m
	  kube-system                 kube-apiserver-functional-20220629181245-2408             250m (1%)     0 (0%)      0 (0%)           0 (0%)         34m
	  kube-system                 kube-controller-manager-functional-20220629181245-2408    200m (1%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-proxy-xnr8l                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-scheduler-functional-20220629181245-2408             100m (0%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                1350m (8%)  700m (4%)
	  memory             682Mi (1%)  870Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 34m                kube-proxy       
	  Normal  Starting                 37m                kube-proxy       
	  Normal  Starting                 38m                kube-proxy       
	  Normal  NodeHasSufficientMemory  38m (x6 over 38m)  kubelet          Node functional-20220629181245-2408 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38m (x6 over 38m)  kubelet          Node functional-20220629181245-2408 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38m (x6 over 38m)  kubelet          Node functional-20220629181245-2408 status is now: NodeHasSufficientPID
	  Normal  Starting                 38m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38m                kubelet          Node functional-20220629181245-2408 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38m                kubelet          Node functional-20220629181245-2408 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38m                kubelet          Node functional-20220629181245-2408 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                38m                kubelet          Node functional-20220629181245-2408 status is now: NodeReady
	  Normal  RegisteredNode           38m                node-controller  Node functional-20220629181245-2408 event: Registered Node functional-20220629181245-2408 in Controller
	  Normal  RegisteredNode           36m                node-controller  Node functional-20220629181245-2408 event: Registered Node functional-20220629181245-2408 in Controller
	  Normal  Starting                 34m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  34m (x8 over 34m)  kubelet          Node functional-20220629181245-2408 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34m (x8 over 34m)  kubelet          Node functional-20220629181245-2408 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34m (x7 over 34m)  kubelet          Node functional-20220629181245-2408 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           34m                node-controller  Node functional-20220629181245-2408 event: Registered Node functional-20220629181245-2408 in Controller
	
	* 
	* ==> dmesg <==
	* [Jun29 18:28] WSL2: Performing memory compaction.
	[Jun29 18:29] WSL2: Performing memory compaction.
	[Jun29 18:30] WSL2: Performing memory compaction.
	[Jun29 18:31] WSL2: Performing memory compaction.
	[Jun29 18:32] WSL2: Performing memory compaction.
	[Jun29 18:33] WSL2: Performing memory compaction.
	[Jun29 18:34] WSL2: Performing memory compaction.
	[Jun29 18:35] WSL2: Performing memory compaction.
	[Jun29 18:36] WSL2: Performing memory compaction.
	[Jun29 18:37] WSL2: Performing memory compaction.
	[Jun29 18:38] WSL2: Performing memory compaction.
	[Jun29 18:39] WSL2: Performing memory compaction.
	[Jun29 18:40] WSL2: Performing memory compaction.
	[Jun29 18:41] WSL2: Performing memory compaction.
	[Jun29 18:42] WSL2: Performing memory compaction.
	[Jun29 18:43] WSL2: Performing memory compaction.
	[Jun29 18:44] WSL2: Performing memory compaction.
	[Jun29 18:45] WSL2: Performing memory compaction.
	[Jun29 18:46] WSL2: Performing memory compaction.
	[Jun29 18:47] WSL2: Performing memory compaction.
	[Jun29 18:48] WSL2: Performing memory compaction.
	[Jun29 18:49] WSL2: Performing memory compaction.
	[Jun29 18:50] WSL2: Performing memory compaction.
	[Jun29 18:51] WSL2: Performing memory compaction.
	[Jun29 18:52] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [5b56e5f42994] <==
	* {"level":"warn","ts":"2022-06-29T18:22:38.636Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T18:22:37.484Z","time spent":"1.1514841s","remote":"127.0.0.1:41498","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1157,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2022-06-29T18:22:38.636Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T18:22:37.967Z","time spent":"668.713ms","remote":"127.0.0.1:41484","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T18:22:38.636Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.722173s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:13595"}
	{"level":"info","ts":"2022-06-29T18:22:38.636Z","caller":"traceutil/trace.go:171","msg":"trace[1090498900] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:977; }","duration":"1.7222338s","start":"2022-06-29T18:22:36.914Z","end":"2022-06-29T18:22:38.636Z","steps":["trace[1090498900] 'range keys from in-memory index tree'  (duration: 1.721868s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T18:22:38.636Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T18:22:36.914Z","time spent":"1.7222991s","remote":"127.0.0.1:41502","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":13619,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2022-06-29T18:22:38.636Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.3621525s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T18:22:38.636Z","caller":"traceutil/trace.go:171","msg":"trace[1325742955] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:977; }","duration":"1.3628992s","start":"2022-06-29T18:22:37.273Z","end":"2022-06-29T18:22:38.636Z","steps":["trace[1325742955] 'range keys from in-memory index tree'  (duration: 1.3620724s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T18:22:38.636Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.1559039s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T18:22:38.637Z","caller":"traceutil/trace.go:171","msg":"trace[1211758340] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:977; }","duration":"1.1568057s","start":"2022-06-29T18:22:37.480Z","end":"2022-06-29T18:22:38.637Z","steps":["trace[1211758340] 'range keys from in-memory index tree'  (duration: 1.1557784s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T18:22:38.637Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T18:22:37.480Z","time spent":"1.1568767s","remote":"127.0.0.1:41522","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2022-06-29T18:28:11.575Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1009}
	{"level":"info","ts":"2022-06-29T18:28:11.669Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1009,"took":"93.0258ms"}
	{"level":"info","ts":"2022-06-29T18:29:25.773Z","caller":"traceutil/trace.go:171","msg":"trace[953015035] linearizableReadLoop","detail":"{readStateIndex:1473; appliedIndex:1473; }","duration":"197.3427ms","start":"2022-06-29T18:29:25.575Z","end":"2022-06-29T18:29:25.773Z","steps":["trace[953015035] 'read index received'  (duration: 197.329ms)","trace[953015035] 'applied index is now lower than readState.Index'  (duration: 9.3µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T18:29:25.785Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"209.4074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-06-29T18:29:25.785Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.6091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T18:29:25.785Z","caller":"traceutil/trace.go:171","msg":"trace[1809912113] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1270; }","duration":"209.5852ms","start":"2022-06-29T18:29:25.575Z","end":"2022-06-29T18:29:25.785Z","steps":["trace[1809912113] 'agreement among raft nodes before linearized reading'  (duration: 197.4971ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T18:29:25.785Z","caller":"traceutil/trace.go:171","msg":"trace[1900282098] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:1270; }","duration":"110.6865ms","start":"2022-06-29T18:29:25.674Z","end":"2022-06-29T18:29:25.785Z","steps":["trace[1900282098] 'agreement among raft nodes before linearized reading'  (duration: 98.6382ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T18:33:11.601Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1219}
	{"level":"info","ts":"2022-06-29T18:33:11.602Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1219,"took":"771.4µs"}
	{"level":"info","ts":"2022-06-29T18:38:11.619Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1428}
	{"level":"info","ts":"2022-06-29T18:38:11.620Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1428,"took":"512.2µs"}
	{"level":"info","ts":"2022-06-29T18:43:11.634Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1637}
	{"level":"info","ts":"2022-06-29T18:43:11.636Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1637,"took":"825.5µs"}
	{"level":"info","ts":"2022-06-29T18:48:11.649Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1848}
	{"level":"info","ts":"2022-06-29T18:48:11.650Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1848,"took":"658.3µs"}
	
	* 
	* ==> etcd [8a221bc76ab3] <==
	* {"level":"info","ts":"2022-06-29T18:17:55.473Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-29T18:17:55.474Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-29T18:17:55.474Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-29T18:17:56.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2022-06-29T18:17:56.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2022-06-29T18:17:56.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2022-06-29T18:17:56.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 5"}
	{"level":"info","ts":"2022-06-29T18:17:56.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2022-06-29T18:17:56.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 5"}
	{"level":"info","ts":"2022-06-29T18:17:56.584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 5"}
	{"level":"info","ts":"2022-06-29T18:17:56.588Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220629181245-2408 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-29T18:17:56.588Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T18:17:56.588Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T18:17:56.594Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-29T18:17:56.592Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T18:17:56.595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-29T18:17:56.596Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-29T18:17:58.575Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-29T18:17:58.575Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-20220629181245-2408","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/06/29 18:17:58 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/29 18:17:58 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-29T18:17:58.579Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-06-29T18:17:58.683Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-29T18:17:58.685Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-29T18:17:58.685Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-20220629181245-2408","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  18:53:01 up  1:00,  0 users,  load average: 1.34, 0.59, 0.60
	Linux functional-20220629181245-2408 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [3df956d56f8a] <==
	* I0629 18:18:36.765763       1 controller.go:611] quota admission added evaluator for: endpoints
	I0629 18:18:54.805309       1 alloc.go:327] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.101.22.4]
	I0629 18:18:54.889923       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0629 18:19:27.091943       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0629 18:19:27.876891       1 alloc.go:327] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.104.31.193]
	I0629 18:20:02.770497       1 alloc.go:327] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.107.130.255]
	I0629 18:21:56.775484       1 alloc.go:327] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.99.3.49]
	I0629 18:22:30.912893       1 trace.go:205] Trace[464298855]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:94e1f486-30ee-403d-9c75-8e2dbd5b6305,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (29-Jun-2022 18:22:30.272) (total time: 640ms):
	Trace[464298855]: ---"About to write a response" 639ms (18:22:30.912)
	Trace[464298855]: [640.0547ms] [640.0547ms] END
	I0629 18:22:30.913261       1 trace.go:205] Trace[1639686775]: "List(recursive=true) etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (29-Jun-2022 18:22:29.902) (total time: 1010ms):
	Trace[1639686775]: [1.0104576s] [1.0104576s] END
	I0629 18:22:30.913734       1 trace.go:205] Trace[377805006]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:70b46666-5bea-4db2-b276-95178f0c4918,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (29-Jun-2022 18:22:29.902) (total time: 1010ms):
	Trace[377805006]: ---"Listing from storage done" 1010ms (18:22:30.913)
	Trace[377805006]: [1.0109662s] [1.0109662s] END
	I0629 18:22:38.638406       1 trace.go:205] Trace[1742180193]: "List(recursive=true) etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (29-Jun-2022 18:22:36.913) (total time: 1725ms):
	Trace[1742180193]: [1.7250993s] [1.7250993s] END
	I0629 18:22:38.638408       1 trace.go:205] Trace[1455204933]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:a77fbf62-ebcf-4aed-bfa4-a29e26785ca1,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (29-Jun-2022 18:22:37.483) (total time: 1154ms):
	Trace[1455204933]: ---"About to write a response" 1153ms (18:22:38.637)
	Trace[1455204933]: [1.1541912s] [1.1541912s] END
	I0629 18:22:38.639614       1 trace.go:205] Trace[1554511105]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:b5dc64e7-cb2d-49c9-bbb2-e93dc85cf6f9,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (29-Jun-2022 18:22:36.913) (total time: 1726ms):
	Trace[1554511105]: ---"Listing from storage done" 1725ms (18:22:38.638)
	Trace[1554511105]: [1.7263635s] [1.7263635s] END
	W0629 18:35:38.429939       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
	W0629 18:44:05.619131       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
	
	* 
	* ==> kube-controller-manager [2a1307e43ecd] <==
	* I0629 18:17:55.310243       1 serving.go:348] Generated self-signed cert in-memory
	I0629 18:17:56.700986       1 controllermanager.go:180] Version: v1.24.2
	I0629 18:17:56.701122       1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 18:17:56.703797       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0629 18:17:56.703940       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0629 18:17:56.703997       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0629 18:17:56.704110       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-controller-manager [b45d5ee5a7f3] <==
	* I0629 18:18:32.469978       1 shared_informer.go:262] Caches are synced for PVC protection
	I0629 18:18:32.470023       1 shared_informer.go:262] Caches are synced for GC
	I0629 18:18:32.470100       1 shared_informer.go:262] Caches are synced for attach detach
	I0629 18:18:32.470143       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0629 18:18:32.469913       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0629 18:18:32.470260       1 event.go:294] "Event occurred" object="functional-20220629181245-2408" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20220629181245-2408 event: Registered Node functional-20220629181245-2408 in Controller"
	I0629 18:18:32.470371       1 shared_informer.go:262] Caches are synced for job
	I0629 18:18:32.470383       1 shared_informer.go:262] Caches are synced for HPA
	I0629 18:18:32.470106       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0629 18:18:32.470926       1 shared_informer.go:262] Caches are synced for deployment
	I0629 18:18:32.471767       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0629 18:18:32.476069       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0629 18:18:32.485317       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 18:18:32.572262       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 18:18:33.069023       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 18:18:33.069069       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 18:18:33.069198       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0629 18:19:14.069008       1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0629 18:19:14.069123       1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0629 18:19:27.202522       1 event.go:294] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-578cdc45cb to 1"
	I0629 18:19:27.473188       1 event.go:294] "Event occurred" object="default/hello-node-connect-578cdc45cb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-578cdc45cb-m2pgx"
	I0629 18:20:02.412242       1 event.go:294] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54c4b5c49f to 1"
	I0629 18:20:02.490404       1 event.go:294] "Event occurred" object="default/hello-node-54c4b5c49f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54c4b5c49f-7pm4f"
	I0629 18:21:56.811786       1 event.go:294] "Event occurred" object="default/mysql" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-67f7d69d8b to 1"
	I0629 18:21:56.973069       1 event.go:294] "Event occurred" object="default/mysql-67f7d69d8b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-67f7d69d8b-b2279"
	
	* 
	* ==> kube-proxy [249f22d60073] <==
	* I0629 18:18:19.390561       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0629 18:18:19.393001       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0629 18:18:19.395414       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0629 18:18:19.397973       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0629 18:18:19.401391       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0629 18:18:19.481814       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0629 18:18:19.481936       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0629 18:18:19.482088       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 18:18:19.678168       1 server_others.go:206] "Using iptables Proxier"
	I0629 18:18:19.678365       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 18:18:19.678382       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 18:18:19.678397       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 18:18:19.678424       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 18:18:19.678843       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 18:18:19.681793       1 server.go:661] "Version info" version="v1.24.2"
	I0629 18:18:19.681917       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 18:18:19.686912       1 config.go:226] "Starting endpoint slice config controller"
	I0629 18:18:19.687050       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 18:18:19.687100       1 config.go:444] "Starting node config controller"
	I0629 18:18:19.687123       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 18:18:19.687113       1 config.go:317] "Starting service config controller"
	I0629 18:18:19.687174       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 18:18:19.869002       1 shared_informer.go:262] Caches are synced for node config
	I0629 18:18:19.869105       1 shared_informer.go:262] Caches are synced for service config
	I0629 18:18:19.869126       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [ad13f7d80fed] <==
	* E0629 18:17:53.802512       1 proxier.go:657] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0629 18:17:53.873341       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0629 18:17:53.881080       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0629 18:17:53.883649       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0629 18:17:53.886482       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0629 18:17:53.889538       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E0629 18:17:53.893344       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220629181245-2408": dial tcp 192.168.49.2:8441: connect: connection refused
	E0629 18:17:54.968525       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220629181245-2408": dial tcp 192.168.49.2:8441: connect: connection refused
	E0629 18:17:57.263184       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220629181245-2408": dial tcp 192.168.49.2:8441: connect: connection refused
	
	* 
	* ==> kube-scheduler [d24ee2d8aa4b] <==
	* I0629 18:18:10.683371       1 serving.go:348] Generated self-signed cert in-memory
	W0629 18:18:16.069048       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0629 18:18:16.069207       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0629 18:18:16.069231       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0629 18:18:16.069248       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0629 18:18:16.180359       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0629 18:18:16.180482       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 18:18:16.183071       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0629 18:18:16.183162       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0629 18:18:16.183166       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 18:18:16.183228       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0629 18:18:16.369625       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [ede947b24c2c] <==
	* I0629 18:17:36.029632       1 serving.go:348] Generated self-signed cert in-memory
	W0629 18:17:47.208677       1 authentication.go:346] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0629 18:17:47.208830       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0629 18:17:47.208843       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 18:13:39 UTC, end at Wed 2022-06-29 18:53:02 UTC. --
	Jun 29 18:19:49 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:49.676756   10903 operation_generator.go:856] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80abb961-e8fb-4b10-888e-07041c970642-pvc-11ec5ef3-696c-483c-b90d-92cc296733a8" (OuterVolumeSpecName: "mypd") pod "80abb961-e8fb-4b10-888e-07041c970642" (UID: "80abb961-e8fb-4b10-888e-07041c970642"). InnerVolumeSpecName "pvc-11ec5ef3-696c-483c-b90d-92cc296733a8". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jun 29 18:19:49 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:49.676736   10903 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8qnv\" (UniqueName: \"kubernetes.io/projected/80abb961-e8fb-4b10-888e-07041c970642-kube-api-access-t8qnv\") pod \"80abb961-e8fb-4b10-888e-07041c970642\" (UID: \"80abb961-e8fb-4b10-888e-07041c970642\") "
	Jun 29 18:19:49 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:49.676993   10903 reconciler.go:312] "Volume detached for volume \"pvc-11ec5ef3-696c-483c-b90d-92cc296733a8\" (UniqueName: \"kubernetes.io/host-path/80abb961-e8fb-4b10-888e-07041c970642-pvc-11ec5ef3-696c-483c-b90d-92cc296733a8\") on node \"functional-20220629181245-2408\" DevicePath \"\""
	Jun 29 18:19:49 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:49.680827   10903 operation_generator.go:856] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80abb961-e8fb-4b10-888e-07041c970642-kube-api-access-t8qnv" (OuterVolumeSpecName: "kube-api-access-t8qnv") pod "80abb961-e8fb-4b10-888e-07041c970642" (UID: "80abb961-e8fb-4b10-888e-07041c970642"). InnerVolumeSpecName "kube-api-access-t8qnv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 29 18:19:49 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:49.778415   10903 reconciler.go:312] "Volume detached for volume \"kube-api-access-t8qnv\" (UniqueName: \"kubernetes.io/projected/80abb961-e8fb-4b10-888e-07041c970642-kube-api-access-t8qnv\") on node \"functional-20220629181245-2408\" DevicePath \"\""
	Jun 29 18:19:50 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:50.575370   10903 scope.go:110] "RemoveContainer" containerID="8eb2d8391ccfaea6ca7c6f63d9fec05f02003fa857df481a8c4f055337d64693"
	Jun 29 18:19:51 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:51.895601   10903 topology_manager.go:200] "Topology Admit Handler"
	Jun 29 18:19:51 functional-20220629181245-2408 kubelet[10903]: E0629 18:19:51.895819   10903 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="80abb961-e8fb-4b10-888e-07041c970642" containerName="myfrontend"
	Jun 29 18:19:51 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:51.895890   10903 memory_manager.go:345] "RemoveStaleState removing state" podUID="80abb961-e8fb-4b10-888e-07041c970642" containerName="myfrontend"
	Jun 29 18:19:52 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:52.079847   10903 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z2mn\" (UniqueName: \"kubernetes.io/projected/a43b54bd-e9b2-405a-a1ce-fe921f7596f3-kube-api-access-7z2mn\") pod \"sp-pod\" (UID: \"a43b54bd-e9b2-405a-a1ce-fe921f7596f3\") " pod="default/sp-pod"
	Jun 29 18:19:52 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:52.079978   10903 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-11ec5ef3-696c-483c-b90d-92cc296733a8\" (UniqueName: \"kubernetes.io/host-path/a43b54bd-e9b2-405a-a1ce-fe921f7596f3-pvc-11ec5ef3-696c-483c-b90d-92cc296733a8\") pod \"sp-pod\" (UID: \"a43b54bd-e9b2-405a-a1ce-fe921f7596f3\") " pod="default/sp-pod"
	Jun 29 18:19:52 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:52.878559   10903 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=80abb961-e8fb-4b10-888e-07041c970642 path="/var/lib/kubelet/pods/80abb961-e8fb-4b10-888e-07041c970642/volumes"
	Jun 29 18:19:55 functional-20220629181245-2408 kubelet[10903]: I0629 18:19:55.071809   10903 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="19704fb3610ebb1d7f68500b2e535cea07de6bce78eb409dc44bc82eae459972"
	Jun 29 18:20:02 functional-20220629181245-2408 kubelet[10903]: I0629 18:20:02.497984   10903 topology_manager.go:200] "Topology Admit Handler"
	Jun 29 18:20:02 functional-20220629181245-2408 kubelet[10903]: I0629 18:20:02.675414   10903 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ncwx\" (UniqueName: \"kubernetes.io/projected/5a35bec7-0a31-421a-98b3-2ac8fb3946dc-kube-api-access-4ncwx\") pod \"hello-node-54c4b5c49f-7pm4f\" (UID: \"5a35bec7-0a31-421a-98b3-2ac8fb3946dc\") " pod="default/hello-node-54c4b5c49f-7pm4f"
	Jun 29 18:20:03 functional-20220629181245-2408 kubelet[10903]: I0629 18:20:03.871063   10903 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="af2618c2f2906803e51dd58eb89c449934092ee26e5e5273aa984271d1540fb2"
	Jun 29 18:21:56 functional-20220629181245-2408 kubelet[10903]: I0629 18:21:56.987076   10903 topology_manager.go:200] "Topology Admit Handler"
	Jun 29 18:21:57 functional-20220629181245-2408 kubelet[10903]: I0629 18:21:57.185764   10903 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvxbz\" (UniqueName: \"kubernetes.io/projected/f0829689-622a-4c53-84e4-00e90b285721-kube-api-access-kvxbz\") pod \"mysql-67f7d69d8b-b2279\" (UID: \"f0829689-622a-4c53-84e4-00e90b285721\") " pod="default/mysql-67f7d69d8b-b2279"
	Jun 29 18:21:58 functional-20220629181245-2408 kubelet[10903]: I0629 18:21:58.505149   10903 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="9fbc6aee81bd15ea5d08a8220b497271df5007196f4d0694c2950cc3cf212360"
	Jun 29 18:23:06 functional-20220629181245-2408 kubelet[10903]: W0629 18:23:06.905441   10903 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 29 18:28:06 functional-20220629181245-2408 kubelet[10903]: W0629 18:28:06.903452   10903 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 29 18:33:06 functional-20220629181245-2408 kubelet[10903]: W0629 18:33:06.906539   10903 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 29 18:38:06 functional-20220629181245-2408 kubelet[10903]: W0629 18:38:06.909359   10903 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 29 18:43:06 functional-20220629181245-2408 kubelet[10903]: W0629 18:43:06.911631   10903 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 29 18:48:06 functional-20220629181245-2408 kubelet[10903]: W0629 18:48:06.925865   10903 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	
	* 
	* ==> storage-provisioner [22c1c4f4f76f] <==
	* I0629 18:18:18.682442       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 18:18:18.975411       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 18:18:18.975763       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0629 18:18:36.772247       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0629 18:18:36.772547       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a577dbe5-0f30-49b2-a23b-cd68951ef24b", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220629181245-2408_1960d0b0-ba6b-4070-a7d1-a6b35801c0e5 became leader
	I0629 18:18:36.772629       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220629181245-2408_1960d0b0-ba6b-4070-a7d1-a6b35801c0e5!
	I0629 18:18:36.873595       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220629181245-2408_1960d0b0-ba6b-4070-a7d1-a6b35801c0e5!
	I0629 18:19:14.069603       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0629 18:19:14.070027       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    2cbd621b-8004-4a3e-a6c5-a293bc58b2f3 386 0 2022-06-29 18:14:53 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-06-29 18:14:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-11ec5ef3-696c-483c-b90d-92cc296733a8 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  11ec5ef3-696c-483c-b90d-92cc296733a8 730 0 2022-06-29 18:19:13 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2022-06-29 18:19:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-06-29 18:19:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0629 18:19:14.071126       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-11ec5ef3-696c-483c-b90d-92cc296733a8" provisioned
	I0629 18:19:14.071294       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0629 18:19:14.071308       1 volume_store.go:212] Trying to save persistentvolume "pvc-11ec5ef3-696c-483c-b90d-92cc296733a8"
	I0629 18:19:14.071549       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"11ec5ef3-696c-483c-b90d-92cc296733a8", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0629 18:19:14.185384       1 volume_store.go:219] persistentvolume "pvc-11ec5ef3-696c-483c-b90d-92cc296733a8" saved
	I0629 18:19:14.270938       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"11ec5ef3-696c-483c-b90d-92cc296733a8", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-11ec5ef3-696c-483c-b90d-92cc296733a8
	
	* 
	* ==> storage-provisioner [8f862c41e32a] <==
	* I0629 18:17:53.689347       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0629 18:17:53.784672       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220629181245-2408 -n functional-20220629181245-2408
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220629181245-2408 -n functional-20220629181245-2408: (6.7803019s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-20220629181245-2408 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-20220629181245-2408 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-20220629181245-2408 describe pod : exit status 1 (183.4325ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context functional-20220629181245-2408 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (1987.77s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (181.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E0629 18:20:17.099874    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:180: nginx-svc svc.status.loadBalancer.ingress never got an IP: timed out waiting for the condition
functional_test_tunnel_test.go:181: (dbg) Run:  kubectl --context functional-20220629181245-2408 get svc nginx-svc
functional_test_tunnel_test.go:185: failed to kubectl get svc nginx-svc:

-- stdout --
	NAME        TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
	nginx-svc   LoadBalancer   10.101.22.4   <pending>     80:32163/TCP   3m18s

-- /stdout --
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (181.30s)
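The `<pending>` EXTERNAL-IP above is the usual symptom when `minikube tunnel` never assigns an ingress IP to the LoadBalancer service, so the repeated jsonpath query at functional_test_tunnel_test.go:169 kept returning empty until the timeout. The test's strategy is simply "re-run the query until it prints something or a deadline passes"; the sketch below illustrates that pattern in shell. The function name `poll_for_output` and the attempt/delay parameters are illustrative, not the test's actual implementation:

```shell
#!/usr/bin/env sh
# Poll a command until it produces non-empty output or the attempt
# budget is exhausted. Mirrors the test's loop around:
#   kubectl get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
poll_for_output() {
    cmd=$1        # command to run on each attempt
    attempts=$2   # maximum number of attempts
    delay=$3      # seconds to sleep between attempts
    i=0
    while [ "$i" -lt "$attempts" ]; do
        out=$(eval "$cmd")
        if [ -n "$out" ]; then
            printf '%s\n' "$out"
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    echo "timed out waiting for the condition" >&2
    return 1
}

# Example (requires a running cluster and an active `minikube tunnel`):
# poll_for_output "kubectl --context functional-20220629181245-2408 \
#   get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'" 60 3
```

On the Docker driver for Windows, an ingress IP only appears once `minikube tunnel` is running and able to bind, which is why the query stayed empty here.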

TestMultiNode/serial/RestartKeepsNodes (313.49s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220629191914-2408
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-20220629191914-2408
multinode_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-20220629191914-2408: (39.1726933s)
multinode_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220629191914-2408 --wait=true -v=8 --alsologtostderr
E0629 19:33:20.304272    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 19:33:54.264255    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 19:34:57.966445    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:35:17.123093    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 19:36:54.780113    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
multinode_test.go:293: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220629191914-2408 --wait=true -v=8 --alsologtostderr: exit status 80 (4m6.1884773s)

-- stdout --
	* [multinode-20220629191914-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20220629191914-2408 in cluster multinode-20220629191914-2408
	* Pulling base image ...
	* Restarting existing docker container for "multinode-20220629191914-2408" ...
	* Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Starting worker node multinode-20220629191914-2408-m02 in cluster multinode-20220629191914-2408
	* Pulling base image ...
	* Restarting existing docker container for "multinode-20220629191914-2408-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.58.2
	  - no_proxy=192.168.58.2
	* Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	  - env NO_PROXY=192.168.58.2
	
	

-- /stdout --
** stderr ** 
	I0629 19:32:51.363731    6596 out.go:296] Setting OutFile to fd 1008 ...
	I0629 19:32:51.420577    6596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 19:32:51.420577    6596 out.go:309] Setting ErrFile to fd 568...
	I0629 19:32:51.420651    6596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 19:32:51.441230    6596 out.go:303] Setting JSON to false
	I0629 19:32:51.442731    6596 start.go:115] hostinfo: {"hostname":"minikube8","uptime":23733,"bootTime":1656507438,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0629 19:32:51.443741    6596 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 19:32:51.448090    6596 out.go:177] * [multinode-20220629191914-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0629 19:32:51.451318    6596 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 19:32:51.451115    6596 notify.go:193] Checking for updates...
	I0629 19:32:51.455322    6596 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0629 19:32:51.458212    6596 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 19:32:51.460283    6596 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 19:32:51.463282    6596 config.go:178] Loaded profile config "multinode-20220629191914-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 19:32:51.463282    6596 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 19:32:54.624687    6596 docker.go:137] docker version: linux-20.10.16
	I0629 19:32:54.634957    6596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 19:32:56.682184    6596 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0472133s)
	I0629 19:32:56.682184    6596 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:52 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-29 19:32:55.6605854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 19:32:56.687470    6596 out.go:177] * Using the docker driver based on existing profile
	I0629 19:32:56.690727    6596 start.go:284] selected driver: docker
	I0629 19:32:56.690727    6596 start.go:808] validating driver "docker" against &{Name:multinode-20220629191914-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:multinode-20220629191914-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 19:32:56.690727    6596 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 19:32:56.703181    6596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 19:32:58.765755    6596 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0624485s)
	I0629 19:32:58.766026    6596 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:52 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-29 19:32:57.7553919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 19:32:58.873644    6596 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 19:32:58.873644    6596 cni.go:95] Creating CNI manager for ""
	I0629 19:32:58.873644    6596 cni.go:156] 3 nodes found, recommending kindnet
	I0629 19:32:58.873644    6596 start_flags.go:310] config:
	{Name:multinode-20220629191914-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:multinode-20220629191914-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 19:32:58.877509    6596 out.go:177] * Starting control plane node multinode-20220629191914-2408 in cluster multinode-20220629191914-2408
	I0629 19:32:58.883618    6596 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 19:32:58.886166    6596 out.go:177] * Pulling base image ...
	I0629 19:32:58.888791    6596 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 19:32:58.888791    6596 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 19:32:58.889635    6596 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 19:32:58.889635    6596 cache.go:57] Caching tarball of preloaded images
	I0629 19:32:58.889971    6596 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 19:32:58.889971    6596 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 19:32:58.889971    6596 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\config.json ...
	I0629 19:32:59.988163    6596 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 19:32:59.988236    6596 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 19:32:59.988236    6596 cache.go:208] Successfully downloaded all kic artifacts
	I0629 19:32:59.988396    6596 start.go:352] acquiring machines lock for multinode-20220629191914-2408: {Name:mk34f398a922278a637dbc30fba078e459217922 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 19:32:59.988668    6596 start.go:356] acquired machines lock for "multinode-20220629191914-2408" in 162.2µs
	I0629 19:32:59.988822    6596 start.go:94] Skipping create...Using existing machine configuration
	I0629 19:32:59.988890    6596 fix.go:55] fixHost starting: 
	I0629 19:33:00.002328    6596 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}
	I0629 19:33:01.101332    6596 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}: (1.0989966s)
	I0629 19:33:01.101332    6596 fix.go:103] recreateIfNeeded on multinode-20220629191914-2408: state=Stopped err=<nil>
	W0629 19:33:01.101332    6596 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 19:33:01.113482    6596 out.go:177] * Restarting existing docker container for "multinode-20220629191914-2408" ...
	I0629 19:33:01.122532    6596 cli_runner.go:164] Run: docker start multinode-20220629191914-2408
	I0629 19:33:03.191406    6596 cli_runner.go:217] Completed: docker start multinode-20220629191914-2408: (2.0687984s)
	I0629 19:33:03.199634    6596 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}
	I0629 19:33:04.353236    6596 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}: (1.1533981s)
	I0629 19:33:04.353307    6596 kic.go:416] container "multinode-20220629191914-2408" state is running.
	I0629 19:33:04.363125    6596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408
	I0629 19:33:05.592231    6596 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408: (1.2289949s)
	I0629 19:33:05.592417    6596 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\config.json ...
	I0629 19:33:05.595170    6596 machine.go:88] provisioning docker machine ...
	I0629 19:33:05.595170    6596 ubuntu.go:169] provisioning hostname "multinode-20220629191914-2408"
	I0629 19:33:05.605073    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:06.783003    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1779222s)
	I0629 19:33:06.786870    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:33:06.787712    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54820 <nil> <nil>}
	I0629 19:33:06.787712    6596 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220629191914-2408 && echo "multinode-20220629191914-2408" | sudo tee /etc/hostname
	I0629 19:33:07.009622    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220629191914-2408
	
	I0629 19:33:07.021166    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:08.130977    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1096318s)
	I0629 19:33:08.142928    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:33:08.143386    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54820 <nil> <nil>}
	I0629 19:33:08.143386    6596 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220629191914-2408' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220629191914-2408/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220629191914-2408' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 19:33:08.285161    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 19:33:08.285161    6596 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I0629 19:33:08.285161    6596 ubuntu.go:177] setting up certificates
	I0629 19:33:08.285161    6596 provision.go:83] configureAuth start
	I0629 19:33:08.293286    6596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408
	I0629 19:33:09.404816    6596 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408: (1.1112964s)
	I0629 19:33:09.404898    6596 provision.go:138] copyHostCerts
	I0629 19:33:09.405079    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem
	I0629 19:33:09.405389    6596 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I0629 19:33:09.405420    6596 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I0629 19:33:09.405893    6596 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1679 bytes)
	I0629 19:33:09.406762    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem
	I0629 19:33:09.406762    6596 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I0629 19:33:09.406762    6596 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I0629 19:33:09.407455    6596 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0629 19:33:09.408234    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem
	I0629 19:33:09.408511    6596 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I0629 19:33:09.408552    6596 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I0629 19:33:09.408851    6596 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0629 19:33:09.409423    6596 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-20220629191914-2408 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220629191914-2408]
	I0629 19:33:09.921504    6596 provision.go:172] copyRemoteCerts
	I0629 19:33:09.930872    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 19:33:09.937589    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:11.085747    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1480323s)
	I0629 19:33:11.086873    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:33:11.241344    6596 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3104212s)
	I0629 19:33:11.241438    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0629 19:33:11.241761    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0629 19:33:11.298284    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0629 19:33:11.299659    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1261 bytes)
	I0629 19:33:11.348835    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0629 19:33:11.349294    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 19:33:11.409618    6596 provision.go:86] duration metric: configureAuth took 3.124436s
	I0629 19:33:11.409618    6596 ubuntu.go:193] setting minikube options for container-runtime
	I0629 19:33:11.410452    6596 config.go:178] Loaded profile config "multinode-20220629191914-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 19:33:11.418097    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:12.522305    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1039232s)
	I0629 19:33:12.525919    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:33:12.526621    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54820 <nil> <nil>}
	I0629 19:33:12.526621    6596 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 19:33:12.730790    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 19:33:12.731891    6596 ubuntu.go:71] root file system type: overlay
	I0629 19:33:12.732387    6596 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 19:33:12.740400    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:13.834290    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.0936946s)
	I0629 19:33:13.838461    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:33:13.838818    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54820 <nil> <nil>}
	I0629 19:33:13.838818    6596 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 19:33:14.082468    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 19:33:14.091908    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:15.185685    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.093769s)
	I0629 19:33:15.188054    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:33:15.188054    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54820 <nil> <nil>}
	I0629 19:33:15.188054    6596 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 19:33:15.415528    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 19:33:15.415528    6596 machine.go:91] provisioned docker machine in 9.8202924s
	I0629 19:33:15.415528    6596 start.go:306] post-start starting for "multinode-20220629191914-2408" (driver="docker")
	I0629 19:33:15.415528    6596 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 19:33:15.426094    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 19:33:15.433565    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:16.555792    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1220767s)
	I0629 19:33:16.556471    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:33:16.703930    6596 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2778275s)
	I0629 19:33:16.713914    6596 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 19:33:16.726829    6596 command_runner.go:130] > NAME="Ubuntu"
	I0629 19:33:16.726829    6596 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0629 19:33:16.726829    6596 command_runner.go:130] > ID=ubuntu
	I0629 19:33:16.726829    6596 command_runner.go:130] > ID_LIKE=debian
	I0629 19:33:16.726829    6596 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0629 19:33:16.726829    6596 command_runner.go:130] > VERSION_ID="20.04"
	I0629 19:33:16.726829    6596 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0629 19:33:16.726829    6596 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0629 19:33:16.726829    6596 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0629 19:33:16.726829    6596 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0629 19:33:16.726829    6596 command_runner.go:130] > VERSION_CODENAME=focal
	I0629 19:33:16.726829    6596 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0629 19:33:16.726829    6596 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 19:33:16.726829    6596 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 19:33:16.726829    6596 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 19:33:16.726829    6596 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 19:33:16.726829    6596 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I0629 19:33:16.727400    6596 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I0629 19:33:16.728093    6596 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem -> 24082.pem in /etc/ssl/certs
	I0629 19:33:16.728136    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem -> /etc/ssl/certs/24082.pem
	I0629 19:33:16.738930    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 19:33:16.770570    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem --> /etc/ssl/certs/24082.pem (1708 bytes)
	I0629 19:33:16.824387    6596 start.go:309] post-start completed in 1.4088489s
	I0629 19:33:16.834038    6596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 19:33:16.840648    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:17.928283    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.0876279s)
	I0629 19:33:17.928283    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:33:18.051017    6596 command_runner.go:130] > 5%
	I0629 19:33:18.051017    6596 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.216971s)
	I0629 19:33:18.060981    6596 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 19:33:18.075047    6596 command_runner.go:130] > 227G
	I0629 19:33:18.075451    6596 fix.go:57] fixHost completed within 18.086477s
	I0629 19:33:18.075527    6596 start.go:81] releasing machines lock for "multinode-20220629191914-2408", held for 18.0867122s
	I0629 19:33:18.083577    6596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408
	I0629 19:33:19.181084    6596 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408: (1.0974995s)
	I0629 19:33:19.183571    6596 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 19:33:19.191201    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:19.191987    6596 ssh_runner.go:195] Run: systemctl --version
	I0629 19:33:19.199288    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:20.304272    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1049764s)
	I0629 19:33:20.304801    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:33:20.327933    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1367243s)
	I0629 19:33:20.328587    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:33:20.430078    6596 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.17)
	I0629 19:33:20.430624    6596 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0629 19:33:20.430690    6596 ssh_runner.go:235] Completed: systemctl --version: (1.2386294s)
	I0629 19:33:20.441100    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0629 19:33:20.548145    6596 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0629 19:33:20.548145    6596 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0629 19:33:20.548145    6596 command_runner.go:130] > <H1>302 Moved</H1>
	I0629 19:33:20.548145    6596 command_runner.go:130] > The document has moved
	I0629 19:33:20.548145    6596 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0629 19:33:20.548145    6596 command_runner.go:130] > </BODY></HTML>
	I0629 19:33:20.548145    6596 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.364565s)
	I0629 19:33:20.548145    6596 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0629 19:33:20.602145    6596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 19:33:20.764194    6596 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0629 19:33:20.966620    6596 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 19:33:21.024232    6596 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0629 19:33:21.024232    6596 command_runner.go:130] > [Unit]
	I0629 19:33:21.024232    6596 command_runner.go:130] > Description=Docker Application Container Engine
	I0629 19:33:21.024232    6596 command_runner.go:130] > Documentation=https://docs.docker.com
	I0629 19:33:21.024232    6596 command_runner.go:130] > BindsTo=containerd.service
	I0629 19:33:21.024232    6596 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0629 19:33:21.024232    6596 command_runner.go:130] > Wants=network-online.target
	I0629 19:33:21.024232    6596 command_runner.go:130] > Requires=docker.socket
	I0629 19:33:21.024232    6596 command_runner.go:130] > StartLimitBurst=3
	I0629 19:33:21.024232    6596 command_runner.go:130] > StartLimitIntervalSec=60
	I0629 19:33:21.024232    6596 command_runner.go:130] > [Service]
	I0629 19:33:21.024232    6596 command_runner.go:130] > Type=notify
	I0629 19:33:21.024232    6596 command_runner.go:130] > Restart=on-failure
	I0629 19:33:21.024232    6596 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0629 19:33:21.024232    6596 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0629 19:33:21.024232    6596 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0629 19:33:21.024232    6596 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0629 19:33:21.024232    6596 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0629 19:33:21.024232    6596 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0629 19:33:21.024232    6596 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0629 19:33:21.024232    6596 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0629 19:33:21.024232    6596 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0629 19:33:21.024232    6596 command_runner.go:130] > ExecStart=
	I0629 19:33:21.024232    6596 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0629 19:33:21.024232    6596 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0629 19:33:21.024232    6596 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0629 19:33:21.024232    6596 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0629 19:33:21.024232    6596 command_runner.go:130] > LimitNOFILE=infinity
	I0629 19:33:21.024775    6596 command_runner.go:130] > LimitNPROC=infinity
	I0629 19:33:21.024775    6596 command_runner.go:130] > LimitCORE=infinity
	I0629 19:33:21.024775    6596 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0629 19:33:21.024775    6596 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0629 19:33:21.024775    6596 command_runner.go:130] > TasksMax=infinity
	I0629 19:33:21.024840    6596 command_runner.go:130] > TimeoutStartSec=0
	I0629 19:33:21.024840    6596 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0629 19:33:21.024840    6596 command_runner.go:130] > Delegate=yes
	I0629 19:33:21.024840    6596 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0629 19:33:21.024840    6596 command_runner.go:130] > KillMode=process
	I0629 19:33:21.024840    6596 command_runner.go:130] > [Install]
	I0629 19:33:21.024840    6596 command_runner.go:130] > WantedBy=multi-user.target
	I0629 19:33:21.024840    6596 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 19:33:21.034586    6596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 19:33:21.063598    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 19:33:21.106554    6596 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0629 19:33:21.106554    6596 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0629 19:33:21.120154    6596 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 19:33:21.307523    6596 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 19:33:21.487486    6596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 19:33:21.656038    6596 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 19:33:22.563262    6596 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 19:33:22.718542    6596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 19:33:22.890057    6596 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 19:33:22.919227    6596 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 19:33:22.928911    6596 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 19:33:22.942408    6596 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0629 19:33:22.942408    6596 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0629 19:33:22.942408    6596 command_runner.go:130] > Device: d0h/208d	Inode: 104         Links: 1
	I0629 19:33:22.942408    6596 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0629 19:33:22.942408    6596 command_runner.go:130] > Access: 2022-06-29 19:33:20.778642000 +0000
	I0629 19:33:22.942408    6596 command_runner.go:130] > Modify: 2022-06-29 19:33:20.778642000 +0000
	I0629 19:33:22.942408    6596 command_runner.go:130] > Change: 2022-06-29 19:33:20.778642000 +0000
	I0629 19:33:22.942408    6596 command_runner.go:130] >  Birth: -
	I0629 19:33:22.942408    6596 start.go:468] Will wait 60s for crictl version
	I0629 19:33:22.952518    6596 ssh_runner.go:195] Run: sudo crictl version
	I0629 19:33:23.033726    6596 command_runner.go:130] > Version:  0.1.0
	I0629 19:33:23.033726    6596 command_runner.go:130] > RuntimeName:  docker
	I0629 19:33:23.033726    6596 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0629 19:33:23.033726    6596 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0629 19:33:23.034252    6596 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 19:33:23.042543    6596 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 19:33:23.125025    6596 command_runner.go:130] > 20.10.17
	I0629 19:33:23.138882    6596 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 19:33:23.222698    6596 command_runner.go:130] > 20.10.17
	I0629 19:33:23.227737    6596 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 19:33:23.240448    6596 cli_runner.go:164] Run: docker exec -t multinode-20220629191914-2408 dig +short host.docker.internal
	I0629 19:33:24.551451    6596 cli_runner.go:217] Completed: docker exec -t multinode-20220629191914-2408 dig +short host.docker.internal: (1.3109935s)
	I0629 19:33:24.551451    6596 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 19:33:24.561161    6596 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 19:33:24.571675    6596 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 19:33:24.604042    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:25.690173    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.0861243s)
	I0629 19:33:25.690173    6596 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 19:33:25.698242    6596 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 19:33:25.774482    6596 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.24.2
	I0629 19:33:25.774482    6596 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.24.2
	I0629 19:33:25.774482    6596 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.24.2
	I0629 19:33:25.774482    6596 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.24.2
	I0629 19:33:25.774482    6596 command_runner.go:130] > kindest/kindnetd:v20220510-4929dd75
	I0629 19:33:25.774482    6596 command_runner.go:130] > k8s.gcr.io/etcd:3.5.3-0
	I0629 19:33:25.774482    6596 command_runner.go:130] > k8s.gcr.io/pause:3.7
	I0629 19:33:25.775027    6596 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0629 19:33:25.775027    6596 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0629 19:33:25.775027    6596 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 19:33:25.775080    6596 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0629 19:33:25.775109    6596 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	kindest/kindnetd:v20220510-4929dd75
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0629 19:33:25.775220    6596 docker.go:533] Images already preloaded, skipping extraction
	I0629 19:33:25.782846    6596 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 19:33:25.860779    6596 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.24.2
	I0629 19:33:25.860779    6596 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.24.2
	I0629 19:33:25.860862    6596 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.24.2
	I0629 19:33:25.860862    6596 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.24.2
	I0629 19:33:25.860862    6596 command_runner.go:130] > kindest/kindnetd:v20220510-4929dd75
	I0629 19:33:25.860905    6596 command_runner.go:130] > k8s.gcr.io/etcd:3.5.3-0
	I0629 19:33:25.860905    6596 command_runner.go:130] > k8s.gcr.io/pause:3.7
	I0629 19:33:25.860905    6596 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0629 19:33:25.860905    6596 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0629 19:33:25.860905    6596 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 19:33:25.860905    6596 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0629 19:33:25.860905    6596 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	kindest/kindnetd:v20220510-4929dd75
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0629 19:33:25.860905    6596 cache_images.go:84] Images are preloaded, skipping loading
	I0629 19:33:25.868706    6596 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 19:33:26.035859    6596 command_runner.go:130] > cgroupfs
	I0629 19:33:26.042038    6596 cni.go:95] Creating CNI manager for ""
	I0629 19:33:26.042038    6596 cni.go:156] 3 nodes found, recommending kindnet
	I0629 19:33:26.042190    6596 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 19:33:26.042220    6596 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220629191914-2408 NodeName:multinode-20220629191914-2408 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 19:33:26.042425    6596 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220629191914-2408"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 19:33:26.042579    6596 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220629191914-2408 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:multinode-20220629191914-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 19:33:26.052994    6596 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 19:33:26.075724    6596 command_runner.go:130] > kubeadm
	I0629 19:33:26.075724    6596 command_runner.go:130] > kubectl
	I0629 19:33:26.075724    6596 command_runner.go:130] > kubelet
	I0629 19:33:26.078971    6596 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 19:33:26.090603    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 19:33:26.117892    6596 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (491 bytes)
	I0629 19:33:26.159337    6596 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 19:33:26.195569    6596 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2052 bytes)
	I0629 19:33:26.243295    6596 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0629 19:33:26.263420    6596 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 19:33:26.294986    6596 certs.go:54] Setting up C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408 for IP: 192.168.58.2
	I0629 19:33:26.295633    6596 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key
	I0629 19:33:26.296006    6596 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key
	I0629 19:33:26.296648    6596 certs.go:298] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\client.key
	I0629 19:33:26.296803    6596 certs.go:298] skipping minikube signed cert generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\apiserver.key.cee25041
	I0629 19:33:26.296803    6596 certs.go:298] skipping aggregator signed cert generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\proxy-client.key
	I0629 19:33:26.296803    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0629 19:33:26.297337    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0629 19:33:26.297393    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0629 19:33:26.297393    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0629 19:33:26.297393    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0629 19:33:26.297393    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0629 19:33:26.298034    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0629 19:33:26.298180    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0629 19:33:26.298823    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem (1338 bytes)
	W0629 19:33:26.298823    6596 certs.go:384] ignoring C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408_empty.pem, impossibly tiny 0 bytes
	I0629 19:33:26.298823    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0629 19:33:26.299352    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0629 19:33:26.299672    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0629 19:33:26.299842    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0629 19:33:26.300340    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem (1708 bytes)
	I0629 19:33:26.300591    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:33:26.300794    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem -> /usr/share/ca-certificates/2408.pem
	I0629 19:33:26.300876    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem -> /usr/share/ca-certificates/24082.pem
	I0629 19:33:26.301532    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 19:33:26.361120    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 19:33:26.412721    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 19:33:26.463053    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0629 19:33:26.517668    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 19:33:26.570170    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 19:33:26.631493    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 19:33:26.687125    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 19:33:26.742487    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 19:33:26.795580    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem --> /usr/share/ca-certificates/2408.pem (1338 bytes)
	I0629 19:33:26.852347    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem --> /usr/share/ca-certificates/24082.pem (1708 bytes)
	I0629 19:33:26.899476    6596 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 19:33:26.955891    6596 ssh_runner.go:195] Run: openssl version
	I0629 19:33:26.971630    6596 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0629 19:33:26.982659    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 19:33:27.018590    6596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:33:27.032874    6596 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 29 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:33:27.032874    6596 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:33:27.043885    6596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:33:27.069292    6596 command_runner.go:130] > b5213941
	I0629 19:33:27.080220    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 19:33:27.120059    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2408.pem && ln -fs /usr/share/ca-certificates/2408.pem /etc/ssl/certs/2408.pem"
	I0629 19:33:27.159419    6596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2408.pem
	I0629 19:33:27.173270    6596 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 29 18:12 /usr/share/ca-certificates/2408.pem
	I0629 19:33:27.173835    6596 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 18:12 /usr/share/ca-certificates/2408.pem
	I0629 19:33:27.183792    6596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2408.pem
	I0629 19:33:27.202477    6596 command_runner.go:130] > 51391683
	I0629 19:33:27.215423    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2408.pem /etc/ssl/certs/51391683.0"
	I0629 19:33:27.252461    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24082.pem && ln -fs /usr/share/ca-certificates/24082.pem /etc/ssl/certs/24082.pem"
	I0629 19:33:27.298227    6596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24082.pem
	I0629 19:33:27.312945    6596 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 29 18:12 /usr/share/ca-certificates/24082.pem
	I0629 19:33:27.313023    6596 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 18:12 /usr/share/ca-certificates/24082.pem
	I0629 19:33:27.323612    6596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24082.pem
	I0629 19:33:27.342921    6596 command_runner.go:130] > 3ec20f2e
	I0629 19:33:27.352573    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24082.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 19:33:27.377959    6596 kubeadm.go:395] StartCluster: {Name:multinode-20220629191914-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:multinode-20220629191914-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 19:33:27.388125    6596 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 19:33:27.466070    6596 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 19:33:27.491492    6596 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0629 19:33:27.491492    6596 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0629 19:33:27.491492    6596 command_runner.go:130] > /var/lib/minikube/etcd:
	I0629 19:33:27.491492    6596 command_runner.go:130] > member
	I0629 19:33:27.491492    6596 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 19:33:27.491492    6596 kubeadm.go:626] restartCluster start
	I0629 19:33:27.507017    6596 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 19:33:27.531094    6596 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:27.539212    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:28.645686    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1064667s)
	I0629 19:33:28.646507    6596 kubeconfig.go:116] verify returned: extract IP: "multinode-20220629191914-2408" does not appear in C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 19:33:28.646507    6596 kubeconfig.go:127] "multinode-20220629191914-2408" context is missing from C:\Users\jenkins.minikube8\minikube-integration\kubeconfig - will repair!
	I0629 19:33:28.647397    6596 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 19:33:28.656533    6596 loader.go:372] Config loaded from file:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 19:33:28.657186    6596 kapi.go:59] client config for multinode-20220629191914-2408: &rest.Config{Host:"https://127.0.0.1:54819", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408/client.crt", KeyFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408/client.key", CAFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2300480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0629 19:33:28.658610    6596 cert_rotation.go:137] Starting client certificate rotation controller
	I0629 19:33:28.667032    6596 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 19:33:28.691244    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:28.701602    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:28.731867    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:28.932345    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:28.942495    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:28.972338    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:29.134447    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:29.144207    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:29.170953    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:29.345951    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:29.355395    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:29.381607    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:29.537051    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:29.547209    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:29.580325    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:29.740694    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:29.750407    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:29.781736    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:29.931876    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:29.942159    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:29.977096    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:30.133451    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:30.143861    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:30.174803    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:30.334525    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:30.344893    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:30.372464    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:30.539432    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:30.548599    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:30.576521    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:30.737497    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:30.747771    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:30.775521    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:30.946471    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:30.956071    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:30.984296    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:31.138788    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:31.149395    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:31.185735    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:31.346158    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:31.356459    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:31.387909    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:31.532067    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:31.542887    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:31.569445    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:31.734625    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:31.745081    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:31.777869    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:31.777900    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:31.787744    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:31.817645    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:31.817755    6596 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0629 19:33:31.817755    6596 kubeadm.go:1092] stopping kube-system containers ...
	I0629 19:33:31.825601    6596 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 19:33:31.910287    6596 command_runner.go:130] > 8b3c86d0a1c5
	I0629 19:33:31.910345    6596 command_runner.go:130] > f0ca10825934
	I0629 19:33:31.910345    6596 command_runner.go:130] > 35d237e18d31
	I0629 19:33:31.910345    6596 command_runner.go:130] > fbf6b6b051d1
	I0629 19:33:31.910345    6596 command_runner.go:130] > 4a8fd7455c69
	I0629 19:33:31.910345    6596 command_runner.go:130] > a474d425b0e4
	I0629 19:33:31.910385    6596 command_runner.go:130] > 01dc6840c9af
	I0629 19:33:31.910385    6596 command_runner.go:130] > 677fc6b0f18a
	I0629 19:33:31.910385    6596 command_runner.go:130] > d7c2cbf71616
	I0629 19:33:31.910418    6596 command_runner.go:130] > 1da5e66d6e61
	I0629 19:33:31.910418    6596 command_runner.go:130] > 08172ec4cee1
	I0629 19:33:31.910418    6596 command_runner.go:130] > 72903587275b
	I0629 19:33:31.910418    6596 command_runner.go:130] > aafba86db102
	I0629 19:33:31.910418    6596 command_runner.go:130] > 2b45ac9da375
	I0629 19:33:31.910418    6596 command_runner.go:130] > 2bebeee868d5
	I0629 19:33:31.910418    6596 command_runner.go:130] > 0870274494db
	I0629 19:33:31.910418    6596 docker.go:434] Stopping containers: [8b3c86d0a1c5 f0ca10825934 35d237e18d31 fbf6b6b051d1 4a8fd7455c69 a474d425b0e4 01dc6840c9af 677fc6b0f18a d7c2cbf71616 1da5e66d6e61 08172ec4cee1 72903587275b aafba86db102 2b45ac9da375 2bebeee868d5 0870274494db]
	I0629 19:33:31.918751    6596 ssh_runner.go:195] Run: docker stop 8b3c86d0a1c5 f0ca10825934 35d237e18d31 fbf6b6b051d1 4a8fd7455c69 a474d425b0e4 01dc6840c9af 677fc6b0f18a d7c2cbf71616 1da5e66d6e61 08172ec4cee1 72903587275b aafba86db102 2b45ac9da375 2bebeee868d5 0870274494db
	I0629 19:33:31.994337    6596 command_runner.go:130] > 8b3c86d0a1c5
	I0629 19:33:31.994337    6596 command_runner.go:130] > f0ca10825934
	I0629 19:33:31.994337    6596 command_runner.go:130] > 35d237e18d31
	I0629 19:33:31.994337    6596 command_runner.go:130] > fbf6b6b051d1
	I0629 19:33:31.994337    6596 command_runner.go:130] > 4a8fd7455c69
	I0629 19:33:31.994337    6596 command_runner.go:130] > a474d425b0e4
	I0629 19:33:31.994337    6596 command_runner.go:130] > 01dc6840c9af
	I0629 19:33:31.994337    6596 command_runner.go:130] > 677fc6b0f18a
	I0629 19:33:31.994337    6596 command_runner.go:130] > d7c2cbf71616
	I0629 19:33:31.994337    6596 command_runner.go:130] > 1da5e66d6e61
	I0629 19:33:31.994337    6596 command_runner.go:130] > 08172ec4cee1
	I0629 19:33:31.994337    6596 command_runner.go:130] > 72903587275b
	I0629 19:33:31.994337    6596 command_runner.go:130] > aafba86db102
	I0629 19:33:31.994337    6596 command_runner.go:130] > 2b45ac9da375
	I0629 19:33:31.994337    6596 command_runner.go:130] > 2bebeee868d5
	I0629 19:33:31.994337    6596 command_runner.go:130] > 0870274494db
	I0629 19:33:32.005343    6596 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 19:33:32.050194    6596 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 19:33:32.075502    6596 command_runner.go:130] > -rw------- 1 root root 5643 Jun 29 19:20 /etc/kubernetes/admin.conf
	I0629 19:33:32.075563    6596 command_runner.go:130] > -rw------- 1 root root 5652 Jun 29 19:20 /etc/kubernetes/controller-manager.conf
	I0629 19:33:32.075592    6596 command_runner.go:130] > -rw------- 1 root root 2055 Jun 29 19:21 /etc/kubernetes/kubelet.conf
	I0629 19:33:32.075592    6596 command_runner.go:130] > -rw------- 1 root root 5604 Jun 29 19:20 /etc/kubernetes/scheduler.conf
	I0629 19:33:32.075592    6596 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun 29 19:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun 29 19:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2055 Jun 29 19:21 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun 29 19:20 /etc/kubernetes/scheduler.conf
	
	I0629 19:33:32.084921    6596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0629 19:33:32.111789    6596 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0629 19:33:32.122400    6596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0629 19:33:32.150578    6596 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0629 19:33:32.160490    6596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0629 19:33:32.188499    6596 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:32.198212    6596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0629 19:33:32.234858    6596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0629 19:33:32.266820    6596 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:32.275777    6596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0629 19:33:32.314469    6596 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 19:33:32.339650    6596 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 19:33:32.339650    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 19:33:32.429738    6596 command_runner.go:130] ! W0629 19:33:32.429615    1197 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using the existing "sa" key
	I0629 19:33:32.464469    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 19:33:32.549056    6596 command_runner.go:130] ! W0629 19:33:32.545226    1209 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:33:33.572150    6596 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0629 19:33:33.572150    6596 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0629 19:33:33.572150    6596 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0629 19:33:33.572150    6596 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0629 19:33:33.572150    6596 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0629 19:33:33.572150    6596 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1076733s)
	I0629 19:33:33.572150    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 19:33:33.658772    6596 command_runner.go:130] ! W0629 19:33:33.654999    1223 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:33:33.979839    6596 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0629 19:33:33.979839    6596 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0629 19:33:33.979839    6596 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0629 19:33:33.979839    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 19:33:34.124992    6596 command_runner.go:130] ! W0629 19:33:34.120743    1274 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:33:34.212332    6596 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0629 19:33:34.212332    6596 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0629 19:33:34.212332    6596 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0629 19:33:34.212332    6596 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0629 19:33:34.212332    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 19:33:34.333764    6596 command_runner.go:130] ! W0629 19:33:34.329223    1297 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:33:34.501017    6596 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0629 19:33:34.501153    6596 api_server.go:51] waiting for apiserver process to appear ...
	I0629 19:33:34.516232    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:35.136573    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:35.634228    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:36.137923    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:36.637226    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:37.138506    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:37.634531    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:38.144986    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:38.307571    6596 command_runner.go:130] > 1841
	I0629 19:33:38.307571    6596 api_server.go:71] duration metric: took 3.8065286s to wait for apiserver process to appear ...
	I0629 19:33:38.308128    6596 api_server.go:87] waiting for apiserver healthz status ...
	I0629 19:33:38.308181    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:33:38.315346    6596 api_server.go:256] stopped: https://127.0.0.1:54819/healthz: Get "https://127.0.0.1:54819/healthz": EOF
	I0629 19:33:38.821601    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:33:43.831348    6596 api_server.go:256] stopped: https://127.0.0.1:54819/healthz: Get "https://127.0.0.1:54819/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0629 19:33:44.322567    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:33:44.600737    6596 api_server.go:266] https://127.0.0.1:54819/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 19:33:44.600874    6596 api_server.go:102] status: https://127.0.0.1:54819/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 19:33:44.816593    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:33:44.837496    6596 api_server.go:266] https://127.0.0.1:54819/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 19:33:44.837496    6596 api_server.go:102] status: https://127.0.0.1:54819/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 19:33:45.323644    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:33:45.348473    6596 api_server.go:266] https://127.0.0.1:54819/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 19:33:45.348473    6596 api_server.go:102] status: https://127.0.0.1:54819/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 19:33:45.821683    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:33:45.844403    6596 api_server.go:266] https://127.0.0.1:54819/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 19:33:45.844454    6596 api_server.go:102] status: https://127.0.0.1:54819/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 19:33:46.323922    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:33:46.352270    6596 api_server.go:266] https://127.0.0.1:54819/healthz returned 200:
	ok
	I0629 19:33:46.352945    6596 round_trippers.go:463] GET https://127.0.0.1:54819/version
	I0629 19:33:46.352970    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:46.352999    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:46.353024    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:46.376391    6596 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0629 19:33:46.376481    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:46.376481    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:46.376481    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:46.376481    6596 round_trippers.go:580]     Content-Length: 263
	I0629 19:33:46.376575    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:46 GMT
	I0629 19:33:46.376575    6596 round_trippers.go:580]     Audit-Id: 1dffc048-0e23-48a5-8c86-08a944048159
	I0629 19:33:46.376575    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:46.376575    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:46.376659    6596 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "24",
	  "gitVersion": "v1.24.2",
	  "gitCommit": "f66044f4361b9f1f96f0053dd46cb7dce5e990a8",
	  "gitTreeState": "clean",
	  "buildDate": "2022-06-15T14:15:38Z",
	  "goVersion": "go1.18.3",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0629 19:33:46.376778    6596 api_server.go:140] control plane version: v1.24.2
	I0629 19:33:46.376857    6596 api_server.go:130] duration metric: took 8.068622s to wait for apiserver health ...
	I0629 19:33:46.376857    6596 cni.go:95] Creating CNI manager for ""
	I0629 19:33:46.376857    6596 cni.go:156] 3 nodes found, recommending kindnet
	I0629 19:33:46.381353    6596 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0629 19:33:46.397899    6596 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0629 19:33:46.415351    6596 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0629 19:33:46.415351    6596 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0629 19:33:46.415351    6596 command_runner.go:130] > Device: c7h/199d	Inode: 24833       Links: 1
	I0629 19:33:46.415351    6596 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0629 19:33:46.415351    6596 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0629 19:33:46.415351    6596 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0629 19:33:46.415351    6596 command_runner.go:130] > Change: 2022-06-29 17:58:56.673342000 +0000
	I0629 19:33:46.415351    6596 command_runner.go:130] >  Birth: -
	I0629 19:33:46.415351    6596 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0629 19:33:46.416192    6596 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0629 19:33:46.628595    6596 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0629 19:33:52.200618    6596 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0629 19:33:52.200618    6596 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0629 19:33:52.200618    6596 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0629 19:33:52.200618    6596 command_runner.go:130] > daemonset.apps/kindnet configured
	I0629 19:33:52.200618    6596 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (5.5719856s)
	I0629 19:33:52.201177    6596 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 19:33:52.201392    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods
	I0629 19:33:52.201392    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:52.201392    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:52.201644    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:52.311058    6596 round_trippers.go:574] Response Status: 200 OK in 109 milliseconds
	I0629 19:33:52.311058    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:52.311146    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:52.311146    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:52.311199    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:52.311199    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:52 GMT
	I0629 19:33:52.311199    6596 round_trippers.go:580]     Audit-Id: 87402234-e39b-4a34-bc6a-da76ab5ea9fc
	I0629 19:33:52.311199    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:52.318183    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1172"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-6vjv2","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"957527e4-431b-450f-b20f-ead3b2989f97","resourceVersion":"1129","creationTimestamp":"2022-06-29T19:21:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"a116c2a1-3b73-4187-b568-91f9d2aea979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a116c2a1-3b73-4187-b568-91f9d2aea979\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{". [truncated 85061 chars]
	I0629 19:33:52.324356    6596 system_pods.go:59] 12 kube-system pods found
	I0629 19:33:52.324356    6596 system_pods.go:61] "coredns-6d4b75cb6d-6vjv2" [957527e4-431b-450f-b20f-ead3b2989f97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0629 19:33:52.324356    6596 system_pods.go:61] "etcd-multinode-20220629191914-2408" [afa29b2e-ffc8-4567-bc07-a20bcc1715c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0629 19:33:52.324356    6596 system_pods.go:61] "kindnet-b7v2g" [9febc0b9-2af4-478d-acca-bb892672edc1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0629 19:33:52.324356    6596 system_pods.go:61] "kindnet-q54ld" [db15743e-e6f4-41c8-b655-898eb39adcc6] Running
	I0629 19:33:52.324356    6596 system_pods.go:61] "kindnet-wbwzc" [dbc2ed3b-1dbe-446b-b485-85f5ff911200] Running
	I0629 19:33:52.324356    6596 system_pods.go:61] "kube-apiserver-multinode-20220629191914-2408" [304971a1-1934-418a-997d-b648ac8c4540] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 19:33:52.324356    6596 system_pods.go:61] "kube-controller-manager-multinode-20220629191914-2408" [72c39e43-772d-46ed-9bea-9be30695e2cf] Running
	I0629 19:33:52.324356    6596 system_pods.go:61] "kube-proxy-2mz9l" [0e6449b8-a82c-4e7f-a4a8-a595b07382f3] Running
	I0629 19:33:52.324356    6596 system_pods.go:61] "kube-proxy-5djlc" [734589bd-4941-4bad-bf82-8782fba95fb0] Running
	I0629 19:33:52.324356    6596 system_pods.go:61] "kube-proxy-bccdh" [a949d16f-893b-4f7a-969c-45249a4800e7] Running
	I0629 19:33:52.324356    6596 system_pods.go:61] "kube-scheduler-multinode-20220629191914-2408" [480afc74-9ecd-4957-a8c1-00d3589ebe52] Running
	I0629 19:33:52.324356    6596 system_pods.go:61] "storage-provisioner" [ad5ec42d-16a3-429c-a3d7-c08eeb03dcae] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0629 19:33:52.324356    6596 system_pods.go:74] duration metric: took 123.1783ms to wait for pod list to return data ...
	I0629 19:33:52.324929    6596 node_conditions.go:102] verifying NodePressure condition ...
	I0629 19:33:52.324929    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes
	I0629 19:33:52.325053    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:52.325053    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:52.325053    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:52.419663    6596 round_trippers.go:574] Response Status: 200 OK in 94 milliseconds
	I0629 19:33:52.419663    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:52.419663    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:52.419663    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:52.419663    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:52.419663    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:52 GMT
	I0629 19:33:52.419663    6596 round_trippers.go:580]     Audit-Id: 869ec1e5-355a-41ab-865a-f8ecb19742a5
	I0629 19:33:52.419663    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:52.420635    6596 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1173"},"items":[{"metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-ma
naged-attach-detach":"true"},"managedFields":[{"manager":"kubelet","ope [truncated 16112 chars]
	I0629 19:33:52.422044    6596 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0629 19:33:52.422507    6596 node_conditions.go:123] node cpu capacity is 16
	I0629 19:33:52.422507    6596 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0629 19:33:52.422507    6596 node_conditions.go:123] node cpu capacity is 16
	I0629 19:33:52.422507    6596 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0629 19:33:52.422507    6596 node_conditions.go:123] node cpu capacity is 16
	I0629 19:33:52.422507    6596 node_conditions.go:105] duration metric: took 97.5778ms to run NodePressure ...
	I0629 19:33:52.422601    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 19:33:53.023460    6596 command_runner.go:130] ! W0629 19:33:53.018766    2898 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:33:53.631086    6596 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0629 19:33:53.631086    6596 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0629 19:33:53.631086    6596 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.2084601s)
	I0629 19:33:53.631086    6596 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0629 19:33:53.631086    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0629 19:33:53.631086    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:53.631086    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:53.631086    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:53.640734    6596 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0629 19:33:53.640734    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:53.640734    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:53.640734    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:53.640734    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:53 GMT
	I0629 19:33:53.640734    6596 round_trippers.go:580]     Audit-Id: 2d7f9ee9-0a2e-458e-ad0f-f0f74cb2069d
	I0629 19:33:53.640734    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:53.640734    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:53.641556    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1185"},"items":[{"metadata":{"name":"etcd-multinode-20220629191914-2408","namespace":"kube-system","uid":"afa29b2e-ffc8-4567-bc07-a20bcc1715c9","resourceVersion":"1173","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.mirror":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.seen":"2022-06-29T19:21:09.098212600Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time"
:"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata [truncated 30242 chars]
	I0629 19:33:53.643120    6596 retry.go:31] will retry after 276.165072ms: kubelet not initialised
	I0629 19:33:53.919817    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0629 19:33:53.919817    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:53.919817    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:53.919817    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:53.930541    6596 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0629 19:33:53.930541    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:53.930541    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:53.930541    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:53.931073    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:53.931073    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:53 GMT
	I0629 19:33:53.931073    6596 round_trippers.go:580]     Audit-Id: d6e45913-16b2-4d17-a38c-7702c7ae70f1
	I0629 19:33:53.931073    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:53.931230    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1186"},"items":[{"metadata":{"name":"etcd-multinode-20220629191914-2408","namespace":"kube-system","uid":"afa29b2e-ffc8-4567-bc07-a20bcc1715c9","resourceVersion":"1173","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.mirror":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.seen":"2022-06-29T19:21:09.098212600Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time"
:"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata [truncated 30242 chars]
	I0629 19:33:53.933085    6596 retry.go:31] will retry after 540.190908ms: kubelet not initialised
	I0629 19:33:54.484616    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0629 19:33:54.484715    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:54.484715    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:54.484715    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:54.504808    6596 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0629 19:33:54.504905    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:54.504971    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:54.505024    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:54 GMT
	I0629 19:33:54.505024    6596 round_trippers.go:580]     Audit-Id: 355cd416-fee8-47c1-bb9e-4c6f61335a6c
	I0629 19:33:54.505024    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:54.505099    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:54.505130    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:54.505662    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1192"},"items":[{"metadata":{"name":"etcd-multinode-20220629191914-2408","namespace":"kube-system","uid":"afa29b2e-ffc8-4567-bc07-a20bcc1715c9","resourceVersion":"1173","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.mirror":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.seen":"2022-06-29T19:21:09.098212600Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time"
:"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata [truncated 30242 chars]
	I0629 19:33:54.507323    6596 retry.go:31] will retry after 655.06503ms: kubelet not initialised
	I0629 19:33:55.163153    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0629 19:33:55.163153    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.163153    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.163153    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.172498    6596 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0629 19:33:55.172523    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.172523    6596 round_trippers.go:580]     Audit-Id: f768d4a3-904f-4d8f-86d5-c6e0a217240b
	I0629 19:33:55.172583    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.172603    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.172603    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.172603    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.172646    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.173453    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1198"},"items":[{"metadata":{"name":"etcd-multinode-20220629191914-2408","namespace":"kube-system","uid":"afa29b2e-ffc8-4567-bc07-a20bcc1715c9","resourceVersion":"1173","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.mirror":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.seen":"2022-06-29T19:21:09.098212600Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time"
:"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata [truncated 31162 chars]
	I0629 19:33:55.175885    6596 kubeadm.go:777] kubelet initialised
	I0629 19:33:55.175917    6596 kubeadm.go:778] duration metric: took 1.5448204s waiting for restarted kubelet to initialise ...
	I0629 19:33:55.175917    6596 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 19:33:55.176095    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods
	I0629 19:33:55.176095    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.176095    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.176095    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.189033    6596 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0629 19:33:55.189033    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.189033    6596 round_trippers.go:580]     Audit-Id: 1d28989b-ccea-477a-91d8-95a6d568e580
	I0629 19:33:55.189033    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.189033    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.189033    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.189033    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.189033    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.193031    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1198"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-6vjv2","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"957527e4-431b-450f-b20f-ead3b2989f97","resourceVersion":"1189","creationTimestamp":"2022-06-29T19:21:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"a116c2a1-3b73-4187-b568-91f9d2aea979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a116c2a1-3b73-4187-b568-91f9d2aea979\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{
},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{". [truncated 85062 chars]
	I0629 19:33:55.196959    6596 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-6vjv2" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:55.196959    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-6vjv2
	I0629 19:33:55.196959    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.196959    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.196959    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.203566    6596 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0629 19:33:55.203566    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.203566    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.203566    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.203566    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.203566    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.203566    6596 round_trippers.go:580]     Audit-Id: 8c82db42-9e77-43cc-9591-7fefe81de8d7
	I0629 19:33:55.203566    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.203566    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-6vjv2","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"957527e4-431b-450f-b20f-ead3b2989f97","resourceVersion":"1189","creationTimestamp":"2022-06-29T19:21:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"a116c2a1-3b73-4187-b568-91f9d2aea979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a116c2a1-3b73-4187-b568-91f9d2aea979\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f
:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f: [truncated 6191 chars]
	I0629 19:33:55.204321    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:55.204321    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.204321    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.204321    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.214481    6596 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0629 19:33:55.214533    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.214533    6596 round_trippers.go:580]     Audit-Id: e300df70-a2fa-46c4-97e6-f7c88887318a
	I0629 19:33:55.214533    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.214569    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.214569    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.214569    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.214606    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.214702    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:55.215251    6596 pod_ready.go:92] pod "coredns-6d4b75cb6d-6vjv2" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:55.215251    6596 pod_ready.go:81] duration metric: took 18.2917ms waiting for pod "coredns-6d4b75cb6d-6vjv2" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:55.215251    6596 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:55.215417    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/etcd-multinode-20220629191914-2408
	I0629 19:33:55.215504    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.215504    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.215504    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.223522    6596 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0629 19:33:55.223859    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.223859    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.223859    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.223913    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.223913    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.223913    6596 round_trippers.go:580]     Audit-Id: c88276f7-be11-47e1-8625-d9251c2ca59e
	I0629 19:33:55.223913    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.223913    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220629191914-2408","namespace":"kube-system","uid":"afa29b2e-ffc8-4567-bc07-a20bcc1715c9","resourceVersion":"1173","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.mirror":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.seen":"2022-06-29T19:21:09.098212600Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/ [truncated 6048 chars]
	I0629 19:33:55.224567    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:55.224595    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.224595    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.224653    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.232107    6596 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0629 19:33:55.232107    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.232107    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.232107    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.232107    6596 round_trippers.go:580]     Audit-Id: adc7c30b-4ec5-4f6b-9da4-e233a579c604
	I0629 19:33:55.232107    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.232107    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.232107    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.232824    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:55.232824    6596 pod_ready.go:92] pod "etcd-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:55.232824    6596 pod_ready.go:81] duration metric: took 17.4734ms waiting for pod "etcd-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:55.232824    6596 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:55.232824    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220629191914-2408
	I0629 19:33:55.232824    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.232824    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.232824    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.238821    6596 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0629 19:33:55.238821    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.238821    6596 round_trippers.go:580]     Audit-Id: a61f59ad-ddcf-4610-b4cb-6736bb9486e4
	I0629 19:33:55.238821    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.238821    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.238821    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.238821    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.238821    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.238821    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220629191914-2408","namespace":"kube-system","uid":"304971a1-1934-418a-997d-b648ac8c4540","resourceVersion":"1178","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"9c7eac304a910f4e89eb5c9093788bc9","kubernetes.io/config.mirror":"9c7eac304a910f4e89eb5c9093788bc9","kubernetes.io/config.seen":"2022-06-29T19:21:09.098334300Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","
fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{ [truncated 8515 chars]
	I0629 19:33:55.238821    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:55.238821    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.238821    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.238821    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.254030    6596 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0629 19:33:55.254030    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.254558    6596 round_trippers.go:580]     Audit-Id: 86ce1703-cedf-4a84-b2f4-49ed5bd60494
	I0629 19:33:55.254558    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.254558    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.254655    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.254655    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.254655    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.254655    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:55.255911    6596 pod_ready.go:92] pod "kube-apiserver-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:55.255911    6596 pod_ready.go:81] duration metric: took 23.0867ms waiting for pod "kube-apiserver-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:55.255911    6596 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:55.255911    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220629191914-2408
	I0629 19:33:55.255911    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.255911    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.255911    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.268841    6596 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0629 19:33:55.268841    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.268841    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.268841    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.268841    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.268841    6596 round_trippers.go:580]     Audit-Id: 8da6ba79-0654-45f7-87ae-556404147c9f
	I0629 19:33:55.268841    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.268841    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.268841    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220629191914-2408","namespace":"kube-system","uid":"72c39e43-772d-46ed-9bea-9be30695e2cf","resourceVersion":"1196","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.mirror":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.seen":"2022-06-29T19:21:09.098340400Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".
":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{ [truncated 8350 chars]
	I0629 19:33:55.269838    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:55.269838    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.269838    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.269838    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.276849    6596 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0629 19:33:55.276849    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.276849    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.276849    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.276849    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.276849    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.276849    6596 round_trippers.go:580]     Audit-Id: 9a65f286-cf4f-4743-8cfd-5bb4c0fd8153
	I0629 19:33:55.276849    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.277846    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:55.781456    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220629191914-2408
	I0629 19:33:55.781456    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.781456    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.781456    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.791164    6596 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0629 19:33:55.791164    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.791164    6596 round_trippers.go:580]     Audit-Id: f3a8e9c7-b6d1-4436-be44-7bf09c5795c6
	I0629 19:33:55.791164    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.791164    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.791164    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.791164    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.791164    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.791164    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220629191914-2408","namespace":"kube-system","uid":"72c39e43-772d-46ed-9bea-9be30695e2cf","resourceVersion":"1196","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.mirror":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.seen":"2022-06-29T19:21:09.098340400Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".
":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{ [truncated 8350 chars]
	I0629 19:33:55.792549    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:55.792549    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.792549    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.792549    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.803056    6596 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0629 19:33:55.803056    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.803056    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.803056    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.803056    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.803056    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.803056    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.803056    6596 round_trippers.go:580]     Audit-Id: e85dca24-7398-4575-8fb1-640077e65acb
	I0629 19:33:55.803527    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:56.288729    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220629191914-2408
	I0629 19:33:56.288729    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:56.288811    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:56.288811    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:56.298986    6596 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0629 19:33:56.299024    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:56.299024    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:56.299179    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:56 GMT
	I0629 19:33:56.299179    6596 round_trippers.go:580]     Audit-Id: eab87892-4024-4496-9664-4ba4755e61af
	I0629 19:33:56.299234    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:56.299234    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:56.299234    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:56.299474    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220629191914-2408","namespace":"kube-system","uid":"72c39e43-772d-46ed-9bea-9be30695e2cf","resourceVersion":"1208","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.mirror":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.seen":"2022-06-29T19:21:09.098340400Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".
":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{ [truncated 8088 chars]
	I0629 19:33:56.300061    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:56.300116    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:56.300116    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:56.300176    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:56.308918    6596 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0629 19:33:56.308918    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:56.308918    6596 round_trippers.go:580]     Audit-Id: 4d1bd454-af19-4abb-ac74-8ec090a92bae
	I0629 19:33:56.308918    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:56.308918    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:56.309912    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:56.309912    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:56.309912    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:56 GMT
	I0629 19:33:56.309912    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:56.310617    6596 pod_ready.go:92] pod "kube-controller-manager-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:56.310777    6596 pod_ready.go:81] duration metric: took 1.0548598s waiting for pod "kube-controller-manager-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:56.310777    6596 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2mz9l" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:56.310928    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-2mz9l
	I0629 19:33:56.310928    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:56.310928    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:56.311017    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:56.319997    6596 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0629 19:33:56.319997    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:56.319997    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:56.319997    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:56.319997    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:56.319997    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:56 GMT
	I0629 19:33:56.319997    6596 round_trippers.go:580]     Audit-Id: 226fd44e-1cdd-4cab-9b15-ceb3d570f776
	I0629 19:33:56.319997    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:56.319997    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2mz9l","generateName":"kube-proxy-","namespace":"kube-system","uid":"0e6449b8-a82c-4e7f-a4a8-a595b07382f3","resourceVersion":"538","creationTimestamp":"2022-06-29T19:23:16Z","labels":{"controller-revision-hash":"5c599f896","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"72ed7b02-8d7b-4829-88da-284ac3420400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:23:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72ed7b02-8d7b-4829-88da-284ac3420400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5547 chars]
	I0629 19:33:56.369206    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m02
	I0629 19:33:56.369206    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:56.369206    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:56.369206    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:56.376518    6596 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0629 19:33:56.376627    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:56.376627    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:56.376627    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:56.376627    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:56.376627    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:56.376693    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:56 GMT
	I0629 19:33:56.376693    6596 round_trippers.go:580]     Audit-Id: 2443a0bc-cdca-4087-b87d-cf626931d73a
	I0629 19:33:56.376848    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408-m02","uid":"aaf41655-3991-4e63-82df-36b045e3e43c","resourceVersion":"920","creationTimestamp":"2022-06-29T19:23:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:23:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:23:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 4539 chars]
	I0629 19:33:56.376848    6596 pod_ready.go:92] pod "kube-proxy-2mz9l" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:56.376848    6596 pod_ready.go:81] duration metric: took 66.07ms waiting for pod "kube-proxy-2mz9l" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:56.376848    6596 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5djlc" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:56.569121    6596 request.go:533] Waited for 192.0968ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-5djlc
	I0629 19:33:56.569437    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-5djlc
	I0629 19:33:56.569437    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:56.569437    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:56.569437    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:56.577363    6596 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0629 19:33:56.577403    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:56.577434    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:56.577434    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:56.577469    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:56 GMT
	I0629 19:33:56.577469    6596 round_trippers.go:580]     Audit-Id: 6bb7d709-ce60-4e3f-a257-c4c8cd36a835
	I0629 19:33:56.577469    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:56.577469    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:56.577654    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5djlc","generateName":"kube-proxy-","namespace":"kube-system","uid":"734589bd-4941-4bad-bf82-8782fba95fb0","resourceVersion":"1169","creationTimestamp":"2022-06-29T19:21:20Z","labels":{"controller-revision-hash":"5c599f896","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"72ed7b02-8d7b-4829-88da-284ac3420400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72ed7b02-8d7b-4829-88da-284ac3420400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5745 chars]
	I0629 19:33:56.771207    6596 request.go:533] Waited for 192.7675ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:56.771294    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:56.771294    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:56.771294    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:56.771294    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:56.781423    6596 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0629 19:33:56.781480    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:56.781514    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:56.781514    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:56.781544    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:56 GMT
	I0629 19:33:56.781544    6596 round_trippers.go:580]     Audit-Id: 6f3577c3-f86a-4e9a-81b5-8d2c65e49103
	I0629 19:33:56.781544    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:56.781544    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:56.781544    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:56.782120    6596 pod_ready.go:92] pod "kube-proxy-5djlc" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:56.782120    6596 pod_ready.go:81] duration metric: took 405.2696ms waiting for pod "kube-proxy-5djlc" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:56.782120    6596 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bccdh" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:56.965065    6596 request.go:533] Waited for 182.6394ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-bccdh
	I0629 19:33:56.965158    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-bccdh
	I0629 19:33:56.965158    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:56.965158    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:56.965392    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:56.972835    6596 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0629 19:33:56.972835    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:56.972835    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:56.972835    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:56.972835    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:56 GMT
	I0629 19:33:56.972835    6596 round_trippers.go:580]     Audit-Id: 4dd77c45-3d97-4b32-856b-639dce66bdb3
	I0629 19:33:56.972835    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:56.972835    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:56.972835    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bccdh","generateName":"kube-proxy-","namespace":"kube-system","uid":"a949d16f-893b-4f7a-969c-45249a4800e7","resourceVersion":"1100","creationTimestamp":"2022-06-29T19:26:11Z","labels":{"controller-revision-hash":"5c599f896","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"72ed7b02-8d7b-4829-88da-284ac3420400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:26:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72ed7b02-8d7b-4829-88da-284ac3420400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5753 chars]
	I0629 19:33:57.171134    6596 request.go:533] Waited for 197.0902ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m03
	I0629 19:33:57.171917    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m03
	I0629 19:33:57.171917    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:57.171917    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:57.171986    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:57.180086    6596 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0629 19:33:57.180109    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:57.180109    6596 round_trippers.go:580]     Audit-Id: 4c051343-83c1-4192-857e-bd95d011bbbf
	I0629 19:33:57.180109    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:57.180109    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:57.180109    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:57.180109    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:57.180109    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:57 GMT
	I0629 19:33:57.180109    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408-m03","uid":"a730aee4-fd4f-4ea7-9eba-d4268a85cdf0","resourceVersion":"1086","creationTimestamp":"2022-06-29T19:31:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:31:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"202
2-06-29T19:31:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f [truncated 4211 chars]
	I0629 19:33:57.180631    6596 pod_ready.go:92] pod "kube-proxy-bccdh" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:57.180764    6596 pod_ready.go:81] duration metric: took 398.6413ms waiting for pod "kube-proxy-bccdh" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:57.180796    6596 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:57.373880    6596 request.go:533] Waited for 192.854ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220629191914-2408
	I0629 19:33:57.373970    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220629191914-2408
	I0629 19:33:57.373970    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:57.373970    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:57.373970    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:57.381510    6596 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0629 19:33:57.381574    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:57.381602    6596 round_trippers.go:580]     Audit-Id: a8a5fc86-81ac-44bb-b1f9-8cd3adca30c1
	I0629 19:33:57.381602    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:57.381602    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:57.381602    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:57.381602    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:57.381602    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:57 GMT
	I0629 19:33:57.381602    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220629191914-2408","namespace":"kube-system","uid":"480afc74-9ecd-4957-a8c1-00d3589ebe52","resourceVersion":"1202","creationTimestamp":"2022-06-29T19:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46818e0bdbd624033ed546f4243f4257","kubernetes.io/config.mirror":"46818e0bdbd624033ed546f4243f4257","kubernetes.io/config.seen":"2022-06-29T19:20:50.548921500Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes
.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io [truncated 4972 chars]
	I0629 19:33:57.577489    6596 request.go:533] Waited for 194.8206ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
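The `Waited for 194.8206ms due to client-side throttling` line comes from client-go's flow-control rate limiter, which delays requests once the client exhausts its QPS/burst budget. A toy token-bucket sketch of the same idea (illustrative only, not client-go's actual implementation; `qps` and `burst` values are made up):

```python
import time

class TokenBucket:
    """Toy client-side rate limiter: refills `qps` tokens/sec, stores up to `burst`."""
    def __init__(self, qps, burst):
        self.qps, self.burst = qps, burst
        self.tokens = burst
        self.last = time.monotonic()

    def wait(self):
        """Block until a token is available; return the seconds waited."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return 0.0
        needed = (1 - self.tokens) / self.qps
        time.sleep(needed)           # this sleep is what shows up as "Waited for ..."
        self.tokens = 0
        self.last = time.monotonic()
        return needed

limiter = TokenBucket(qps=5, burst=2)
waits = [limiter.wait() for _ in range(4)]
print(waits)  # the burst absorbs the first requests; later ones pay a wait
```

The first calls drain the burst and return immediately; subsequent calls block, which is exactly the pattern the log reports.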
	I0629 19:33:57.577489    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:57.577489    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:57.577489    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:57.577489    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:57.604690    6596 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0629 19:33:57.605167    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:57.605302    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:57 GMT
	I0629 19:33:57.605393    6596 round_trippers.go:580]     Audit-Id: 06f4a5c7-4952-49dc-9c66-a1e27228920c
	I0629 19:33:57.605393    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:57.605393    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:57.605393    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:57.605393    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:57.605393    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:57.606371    6596 pod_ready.go:92] pod "kube-scheduler-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:57.606371    6596 pod_ready.go:81] duration metric: took 425.5385ms waiting for pod "kube-scheduler-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:57.606371    6596 pod_ready.go:38] duration metric: took 2.4304377s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 19:33:57.606492    6596 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 19:33:57.810186    6596 command_runner.go:130] > -16
	I0629 19:33:57.810186    6596 ops.go:34] apiserver oom_adj: -16
	I0629 19:33:57.810186    6596 kubeadm.go:630] restartCluster took 30.3184901s
	I0629 19:33:57.810186    6596 kubeadm.go:397] StartCluster complete in 30.4325536s
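The `restartCluster took 30.3184901s` style duration metrics can be recomputed from the klog prefixes themselves (`I0629 19:33:57.810186` is severity letter, month/day, then wall time). A small parsing sketch, assuming both lines fall in the same year, using two real timestamps from this log:

```python
from datetime import datetime

def klog_time(line, year=2022):
    """Parse the timestamp out of a klog prefix like 'I0629 19:33:57.810186 ...'."""
    fields = line[1:].split()  # drop the severity letter, keep 'MMDD' and the time
    return datetime.strptime(f"{year}{fields[0]} {fields[1]}", "%Y%m%d %H:%M:%S.%f")

start = klog_time("I0629 19:33:57.577489    6596 request.go:533] ...")
end = klog_time("I0629 19:33:57.810186    6596 kubeadm.go:630] restartCluster took ...")
print((end - start).total_seconds())
```

This is only a reading aid for the log format; minikube computes its duration metrics internally with monotonic timers, not by re-parsing its own output.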
	I0629 19:33:57.810744    6596 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 19:33:57.811029    6596 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 19:33:57.812553    6596 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 19:33:57.824438    6596 loader.go:372] Config loaded from file:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 19:33:57.825483    6596 kapi.go:59] client config for multinode-20220629191914-2408: &rest.Config{Host:"https://127.0.0.1:54819", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408\\client.key", CAFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\ca.crt", CertData:[]u
int8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2300480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0629 19:33:57.826692    6596 cert_rotation.go:137] Starting client certificate rotation controller
	I0629 19:33:57.826692    6596 round_trippers.go:463] GET https://127.0.0.1:54819/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0629 19:33:57.826692    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:57.826692    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:57.826692    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:57.849810    6596 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0629 19:33:57.849810    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:57.849810    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:57.849810    6596 round_trippers.go:580]     Content-Length: 292
	I0629 19:33:57.849810    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:57 GMT
	I0629 19:33:57.849810    6596 round_trippers.go:580]     Audit-Id: f8ce193f-964e-49b8-9d1e-103e3e669926
	I0629 19:33:57.849810    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:57.849810    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:57.849810    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:57.849810    6596 request.go:1073] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e3b60944-576d-4023-b66a-3fdcbedd3a25","resourceVersion":"1184","creationTimestamp":"2022-06-29T19:21:08Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0629 19:33:57.849810    6596 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20220629191914-2408" rescaled to 1
	I0629 19:33:57.850849    6596 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 19:33:57.855864    6596 out.go:177] * Verifying Kubernetes components...
	I0629 19:33:57.850849    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 19:33:57.850849    6596 addons.go:412] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0629 19:33:57.850849    6596 config.go:178] Loaded profile config "multinode-20220629191914-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 19:33:57.858842    6596 addons.go:65] Setting storage-provisioner=true in profile "multinode-20220629191914-2408"
	I0629 19:33:57.858842    6596 addons.go:153] Setting addon storage-provisioner=true in "multinode-20220629191914-2408"
	W0629 19:33:57.858842    6596 addons.go:162] addon storage-provisioner should already be in state true
	I0629 19:33:57.858842    6596 addons.go:65] Setting default-storageclass=true in profile "multinode-20220629191914-2408"
	I0629 19:33:57.858842    6596 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20220629191914-2408"
	I0629 19:33:57.858842    6596 host.go:66] Checking if "multinode-20220629191914-2408" exists ...
	I0629 19:33:57.868852    6596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 19:33:57.876799    6596 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}
	I0629 19:33:57.877803    6596 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}
	I0629 19:33:58.039351    6596 command_runner.go:130] > apiVersion: v1
	I0629 19:33:58.039351    6596 command_runner.go:130] > data:
	I0629 19:33:58.039351    6596 command_runner.go:130] >   Corefile: |
	I0629 19:33:58.039351    6596 command_runner.go:130] >     .:53 {
	I0629 19:33:58.039351    6596 command_runner.go:130] >         errors
	I0629 19:33:58.039351    6596 command_runner.go:130] >         health {
	I0629 19:33:58.039351    6596 command_runner.go:130] >            lameduck 5s
	I0629 19:33:58.039351    6596 command_runner.go:130] >         }
	I0629 19:33:58.039351    6596 command_runner.go:130] >         ready
	I0629 19:33:58.039351    6596 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0629 19:33:58.039351    6596 command_runner.go:130] >            pods insecure
	I0629 19:33:58.039351    6596 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0629 19:33:58.039351    6596 command_runner.go:130] >            ttl 30
	I0629 19:33:58.039351    6596 command_runner.go:130] >         }
	I0629 19:33:58.039351    6596 command_runner.go:130] >         prometheus :9153
	I0629 19:33:58.039351    6596 command_runner.go:130] >         hosts {
	I0629 19:33:58.039351    6596 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I0629 19:33:58.039351    6596 command_runner.go:130] >            fallthrough
	I0629 19:33:58.039351    6596 command_runner.go:130] >         }
	I0629 19:33:58.039351    6596 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0629 19:33:58.039351    6596 command_runner.go:130] >            max_concurrent 1000
	I0629 19:33:58.039351    6596 command_runner.go:130] >         }
	I0629 19:33:58.039351    6596 command_runner.go:130] >         cache 30
	I0629 19:33:58.039351    6596 command_runner.go:130] >         loop
	I0629 19:33:58.039351    6596 command_runner.go:130] >         reload
	I0629 19:33:58.039351    6596 command_runner.go:130] >         loadbalance
	I0629 19:33:58.039351    6596 command_runner.go:130] >     }
	I0629 19:33:58.039351    6596 command_runner.go:130] > kind: ConfigMap
	I0629 19:33:58.039351    6596 command_runner.go:130] > metadata:
	I0629 19:33:58.039351    6596 command_runner.go:130] >   creationTimestamp: "2022-06-29T19:21:08Z"
	I0629 19:33:58.039351    6596 command_runner.go:130] >   name: coredns
	I0629 19:33:58.039351    6596 command_runner.go:130] >   namespace: kube-system
	I0629 19:33:58.039351    6596 command_runner.go:130] >   resourceVersion: "383"
	I0629 19:33:58.039351    6596 command_runner.go:130] >   uid: fad4f9c4-c0ea-4ac6-ab7a-2148242c8a5e
	I0629 19:33:58.039351    6596 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
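The `start.go:786` skip happens because the Corefile dumped above already carries a `host.minikube.internal` entry in its `hosts` block, so there is nothing to patch. The check boils down to scanning the ConfigMap text for that record, roughly (a sketch, not minikube's actual code; the trimmed Corefile below is sample data):

```python
corefile = """.:53 {
    errors
    hosts {
       192.168.65.2 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
}"""

def has_host_record(corefile, host="host.minikube.internal"):
    """True if any Corefile line already mentions the host record."""
    return any(host in line for line in corefile.splitlines())

print(has_host_record(corefile))
```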
	I0629 19:33:58.052321    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:59.019299    6596 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}: (1.141488s)
	I0629 19:33:59.020299    6596 loader.go:372] Config loaded from file:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 19:33:59.020299    6596 kapi.go:59] client config for multinode-20220629191914-2408: &rest.Config{Host:"https://127.0.0.1:54819", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408\\client.key", CAFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\ca.crt", CertData:[]u
int8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2300480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0629 19:33:59.021307    6596 round_trippers.go:463] GET https://127.0.0.1:54819/apis/storage.k8s.io/v1/storageclasses
	I0629 19:33:59.021307    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.021307    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.021307    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.031297    6596 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}: (1.15449s)
	I0629 19:33:59.034298    6596 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 19:33:59.036911    6596 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 19:33:59.036911    6596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 19:33:59.046478    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:59.108569    6596 round_trippers.go:574] Response Status: 200 OK in 87 milliseconds
	I0629 19:33:59.108651    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.108691    6596 round_trippers.go:580]     Audit-Id: 4fd23ea0-bc9c-413a-b3ad-b96733f71102
	I0629 19:33:59.108729    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.108729    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.108729    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.108729    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.108729    6596 round_trippers.go:580]     Content-Length: 1274
	I0629 19:33:59.108729    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.108868    6596 request.go:1073] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"1216"},"items":[{"metadata":{"name":"standard","uid":"c9d8c037-c78f-4b3b-b4b1-ffbf158fdff0","resourceVersion":"396","creationTimestamp":"2022-06-29T19:21:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-06-29T19:21:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubern
etes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is [truncated 250 chars]
	I0629 19:33:59.109973    6596 request.go:1073] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c9d8c037-c78f-4b3b-b4b1-ffbf158fdff0","resourceVersion":"396","creationTimestamp":"2022-06-29T19:21:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-06-29T19:21:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0629 19:33:59.110109    6596 round_trippers.go:463] PUT https://127.0.0.1:54819/apis/storage.k8s.io/v1/storageclasses/standard
	I0629 19:33:59.110139    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.110139    6596 round_trippers.go:473]     Content-Type: application/json
	I0629 19:33:59.110139    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.110139    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.200642    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1483129s)
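The `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'"` calls above resolve which host port Docker mapped to the container's API server port. The same lookup over the inspect JSON, sketched in Python against a hypothetical payload (port values chosen to match this log):

```python
import json

# Hypothetical fragment of `docker container inspect <name>` output.
inspect_json = json.loads("""{
  "NetworkSettings": {
    "Ports": {
      "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "54815"}],
      "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "54819"}]
    }
  }
}""")

def host_port(inspect, container_port):
    """Mirror the Go template: index Ports by e.g. '8443/tcp', take entry 0, read HostPort."""
    return inspect["NetworkSettings"]["Ports"][container_port][0]["HostPort"]

print(host_port(inspect_json, "8443/tcp"))
```

The resolved port is why the subsequent API requests in this log target `https://127.0.0.1:54819`.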
	I0629 19:33:59.200642    6596 node_ready.go:35] waiting up to 6m0s for node "multinode-20220629191914-2408" to be "Ready" ...
	I0629 19:33:59.201741    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:59.201741    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.201741    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.201741    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.203633    6596 round_trippers.go:574] Response Status: 200 OK in 93 milliseconds
	I0629 19:33:59.204639    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.204639    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.204639    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.204639    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.204639    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.204639    6596 round_trippers.go:580]     Content-Length: 1220
	I0629 19:33:59.204639    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.204639    6596 round_trippers.go:580]     Audit-Id: a25940ec-95f1-4f11-b982-6e723a076e49
	I0629 19:33:59.204639    6596 request.go:1073] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c9d8c037-c78f-4b3b-b4b1-ffbf158fdff0","resourceVersion":"396","creationTimestamp":"2022-06-29T19:21:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-06-29T19:21:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0629 19:33:59.205659    6596 addons.go:153] Setting addon default-storageclass=true in "multinode-20220629191914-2408"
	W0629 19:33:59.205659    6596 addons.go:162] addon default-storageclass should already be in state true
	I0629 19:33:59.206642    6596 host.go:66] Checking if "multinode-20220629191914-2408" exists ...
	I0629 19:33:59.209634    6596 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0629 19:33:59.209634    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.209634    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.209634    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.209634    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.209634    6596 round_trippers.go:580]     Audit-Id: 1cb711a1-c7d9-48c4-838c-1220a40c9ec9
	I0629 19:33:59.209634    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.209634    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.209634    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:59.210639    6596 node_ready.go:49] node "multinode-20220629191914-2408" has status "Ready":"True"
	I0629 19:33:59.210639    6596 node_ready.go:38] duration metric: took 9.9971ms waiting for node "multinode-20220629191914-2408" to be "Ready" ...
	I0629 19:33:59.210639    6596 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 19:33:59.210639    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods
	I0629 19:33:59.210639    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.210639    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.210639    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.231613    6596 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}
	I0629 19:33:59.312061    6596 round_trippers.go:574] Response Status: 200 OK in 101 milliseconds
	I0629 19:33:59.312061    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.312203    6596 round_trippers.go:580]     Audit-Id: 731bf19e-12f4-4803-85e9-533a284946d0
	I0629 19:33:59.312203    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.312325    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.312395    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.312395    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.312484    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.320057    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1218"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-6vjv2","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"957527e4-431b-450f-b20f-ead3b2989f97","resourceVersion":"1189","creationTimestamp":"2022-06-29T19:21:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"a116c2a1-3b73-4187-b568-91f9d2aea979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a116c2a1-3b73-4187-b568-91f9d2aea979\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{
},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{". [truncated 84556 chars]
	I0629 19:33:59.325916    6596 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-6vjv2" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.325916    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-6vjv2
	I0629 19:33:59.325916    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.325916    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.325916    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.403223    6596 round_trippers.go:574] Response Status: 200 OK in 77 milliseconds
	I0629 19:33:59.403344    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.403417    6596 round_trippers.go:580]     Audit-Id: 2b392e97-fc47-482e-981d-232f775c95e1
	I0629 19:33:59.403417    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.403417    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.403417    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.403555    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.403555    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.403812    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-6vjv2","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"957527e4-431b-450f-b20f-ead3b2989f97","resourceVersion":"1189","creationTimestamp":"2022-06-29T19:21:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"a116c2a1-3b73-4187-b568-91f9d2aea979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a116c2a1-3b73-4187-b568-91f9d2aea979\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f
:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f: [truncated 6191 chars]
	I0629 19:33:59.404888    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:59.404888    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.405212    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.405259    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.421532    6596 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0629 19:33:59.421590    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.421590    6596 round_trippers.go:580]     Audit-Id: 9bce5c77-9bad-4505-a460-4a1c6057766f
	I0629 19:33:59.421590    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.421693    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.421693    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.421741    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.421741    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.422066    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:59.422677    6596 pod_ready.go:92] pod "coredns-6d4b75cb6d-6vjv2" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:59.422677    6596 pod_ready.go:81] duration metric: took 96.7602ms waiting for pod "coredns-6d4b75cb6d-6vjv2" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.422677    6596 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.422677    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/etcd-multinode-20220629191914-2408
	I0629 19:33:59.422677    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.422677    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.422677    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.431562    6596 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0629 19:33:59.431562    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.431562    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.431562    6596 round_trippers.go:580]     Audit-Id: 8d326192-5555-47d7-8ca2-eaab7c6d16e7
	I0629 19:33:59.431562    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.431562    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.431562    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.431562    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.431562    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220629191914-2408","namespace":"kube-system","uid":"afa29b2e-ffc8-4567-bc07-a20bcc1715c9","resourceVersion":"1173","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.mirror":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.seen":"2022-06-29T19:21:09.098212600Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/ [truncated 6048 chars]
	I0629 19:33:59.433348    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:59.433411    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.433411    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.433466    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.501848    6596 round_trippers.go:574] Response Status: 200 OK in 68 milliseconds
	I0629 19:33:59.501848    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.501848    6596 round_trippers.go:580]     Audit-Id: f6222e1d-c339-4611-b75b-fed5942ae3e5
	I0629 19:33:59.501848    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.501848    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.501848    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.502038    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.502062    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.502246    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:59.502768    6596 pod_ready.go:92] pod "etcd-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:59.502768    6596 pod_ready.go:81] duration metric: took 80.091ms waiting for pod "etcd-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.502768    6596 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.502933    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220629191914-2408
	I0629 19:33:59.502933    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.502933    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.502933    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.518482    6596 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0629 19:33:59.518600    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.518600    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.518600    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.518600    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.518600    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.518600    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.518600    6596 round_trippers.go:580]     Audit-Id: d1ef75dc-25ba-459e-aa61-1f5b6a88aedf
	I0629 19:33:59.519171    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220629191914-2408","namespace":"kube-system","uid":"304971a1-1934-418a-997d-b648ac8c4540","resourceVersion":"1178","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"9c7eac304a910f4e89eb5c9093788bc9","kubernetes.io/config.mirror":"9c7eac304a910f4e89eb5c9093788bc9","kubernetes.io/config.seen":"2022-06-29T19:21:09.098334300Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","
fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{ [truncated 8515 chars]
	I0629 19:33:59.519744    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:59.519744    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.519744    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.519744    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.538368    6596 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0629 19:33:59.538461    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.538506    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.538547    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.538547    6596 round_trippers.go:580]     Audit-Id: 581540b0-5582-4d36-b7e7-69351ffe5fcd
	I0629 19:33:59.538547    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.538547    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.538547    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.538923    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:59.539249    6596 pod_ready.go:92] pod "kube-apiserver-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:59.539249    6596 pod_ready.go:81] duration metric: took 36.3534ms waiting for pod "kube-apiserver-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.539249    6596 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.539793    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220629191914-2408
	I0629 19:33:59.539793    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.539874    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.539874    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.603846    6596 round_trippers.go:574] Response Status: 200 OK in 63 milliseconds
	I0629 19:33:59.603936    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.603936    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.603936    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.603936    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.604019    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.604019    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.604019    6596 round_trippers.go:580]     Audit-Id: 8863cf8c-3aec-427c-84c4-45c95fabcb4d
	I0629 19:33:59.604313    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220629191914-2408","namespace":"kube-system","uid":"72c39e43-772d-46ed-9bea-9be30695e2cf","resourceVersion":"1208","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.mirror":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.seen":"2022-06-29T19:21:09.098340400Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".
":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{ [truncated 8088 chars]
	I0629 19:33:59.605086    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:59.605086    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.605086    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.605086    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.614639    6596 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0629 19:33:59.614639    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.614639    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.614639    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.614639    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.614639    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.614639    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.614639    6596 round_trippers.go:580]     Audit-Id: 8fcf9fb0-81f4-412f-9af4-b796f4983146
	I0629 19:33:59.614639    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:59.615835    6596 pod_ready.go:92] pod "kube-controller-manager-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:59.615835    6596 pod_ready.go:81] duration metric: took 76.5854ms waiting for pod "kube-controller-manager-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.615835    6596 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2mz9l" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.769690    6596 request.go:533] Waited for 153.7722ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-2mz9l
	I0629 19:33:59.769955    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-2mz9l
	I0629 19:33:59.769955    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.769955    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.769955    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.780024    6596 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0629 19:33:59.780090    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.780090    6596 round_trippers.go:580]     Audit-Id: d6e08b0e-305f-4d19-8b55-eb8c430f893b
	I0629 19:33:59.780090    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.780090    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.780090    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.780090    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.780090    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.780353    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2mz9l","generateName":"kube-proxy-","namespace":"kube-system","uid":"0e6449b8-a82c-4e7f-a4a8-a595b07382f3","resourceVersion":"538","creationTimestamp":"2022-06-29T19:23:16Z","labels":{"controller-revision-hash":"5c599f896","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"72ed7b02-8d7b-4829-88da-284ac3420400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:23:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72ed7b02-8d7b-4829-88da-284ac3420400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5547 chars]
	I0629 19:33:59.975489    6596 request.go:533] Waited for 194.4153ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m02
	I0629 19:33:59.975907    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m02
	I0629 19:33:59.975961    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.975961    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.975961    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:00.006409    6596 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0629 19:34:00.006524    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:00.006524    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:00 GMT
	I0629 19:34:00.006524    6596 round_trippers.go:580]     Audit-Id: c93e81dc-5713-470b-9c38-9e1875fb1880
	I0629 19:34:00.006524    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:00.006524    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:00.006524    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:00.006524    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:00.006977    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408-m02","uid":"aaf41655-3991-4e63-82df-36b045e3e43c","resourceVersion":"920","creationTimestamp":"2022-06-29T19:23:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:23:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:23:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 4539 chars]
	I0629 19:34:00.007556    6596 pod_ready.go:92] pod "kube-proxy-2mz9l" in "kube-system" namespace has status "Ready":"True"
	I0629 19:34:00.007556    6596 pod_ready.go:81] duration metric: took 391.7185ms waiting for pod "kube-proxy-2mz9l" in "kube-system" namespace to be "Ready" ...
	I0629 19:34:00.007556    6596 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5djlc" in "kube-system" namespace to be "Ready" ...
	I0629 19:34:00.178587    6596 request.go:533] Waited for 170.3891ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-5djlc
	I0629 19:34:00.178587    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-5djlc
	I0629 19:34:00.178587    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:00.178587    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:00.178587    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:00.193015    6596 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0629 19:34:00.193142    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:00.193235    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:00 GMT
	I0629 19:34:00.193333    6596 round_trippers.go:580]     Audit-Id: 320719bb-8c63-49a0-b064-5753700e1437
	I0629 19:34:00.193425    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:00.193425    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:00.193425    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:00.193425    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:00.196144    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5djlc","generateName":"kube-proxy-","namespace":"kube-system","uid":"734589bd-4941-4bad-bf82-8782fba95fb0","resourceVersion":"1169","creationTimestamp":"2022-06-29T19:21:20Z","labels":{"controller-revision-hash":"5c599f896","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"72ed7b02-8d7b-4829-88da-284ac3420400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72ed7b02-8d7b-4829-88da-284ac3420400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5745 chars]
	I0629 19:34:00.285752    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.239266s)
	I0629 19:34:00.285884    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:34:00.373321    6596 request.go:533] Waited for 176.7ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:34:00.373495    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:34:00.373495    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:00.373548    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:00.373745    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:00.383233    6596 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0629 19:34:00.383233    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:00.383233    6596 round_trippers.go:580]     Audit-Id: 41e560cb-4f78-4573-8b4c-c97f664b48fd
	I0629 19:34:00.383233    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:00.383233    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:00.383233    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:00.384250    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:00.384250    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:00 GMT
	I0629 19:34:00.384250    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:34:00.384250    6596 pod_ready.go:92] pod "kube-proxy-5djlc" in "kube-system" namespace has status "Ready":"True"
	I0629 19:34:00.384250    6596 pod_ready.go:81] duration metric: took 376.6918ms waiting for pod "kube-proxy-5djlc" in "kube-system" namespace to be "Ready" ...
	I0629 19:34:00.384250    6596 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bccdh" in "kube-system" namespace to be "Ready" ...
	I0629 19:34:00.409248    6596 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}: (1.1776263s)
	I0629 19:34:00.409248    6596 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 19:34:00.409248    6596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 19:34:00.416234    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:34:00.461491    6596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 19:34:00.567132    6596 request.go:533] Waited for 182.8159ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-bccdh
	I0629 19:34:00.567389    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-bccdh
	I0629 19:34:00.567389    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:00.567389    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:00.567389    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:00.601588    6596 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0629 19:34:00.601588    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:00.601588    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:00.601588    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:00.601588    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:00.601588    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:00.601588    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:00 GMT
	I0629 19:34:00.601588    6596 round_trippers.go:580]     Audit-Id: b0542b61-3e6c-44fa-a360-8272d090f84e
	I0629 19:34:00.601588    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bccdh","generateName":"kube-proxy-","namespace":"kube-system","uid":"a949d16f-893b-4f7a-969c-45249a4800e7","resourceVersion":"1100","creationTimestamp":"2022-06-29T19:26:11Z","labels":{"controller-revision-hash":"5c599f896","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"72ed7b02-8d7b-4829-88da-284ac3420400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:26:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72ed7b02-8d7b-4829-88da-284ac3420400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5753 chars]
	I0629 19:34:00.766356    6596 request.go:533] Waited for 163.4063ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m03
	I0629 19:34:00.766434    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m03
	I0629 19:34:00.766434    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:00.766434    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:00.766552    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:00.774597    6596 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0629 19:34:00.774647    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:00.774647    6596 round_trippers.go:580]     Audit-Id: 7155faef-2425-4606-867e-e4adf1d0c736
	I0629 19:34:00.774647    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:00.774647    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:00.774647    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:00.774647    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:00.774647    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:00 GMT
	I0629 19:34:00.775205    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408-m03","uid":"a730aee4-fd4f-4ea7-9eba-d4268a85cdf0","resourceVersion":"1086","creationTimestamp":"2022-06-29T19:31:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:31:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"202
2-06-29T19:31:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f [truncated 4211 chars]
	I0629 19:34:00.775642    6596 pod_ready.go:92] pod "kube-proxy-bccdh" in "kube-system" namespace has status "Ready":"True"
	I0629 19:34:00.775716    6596 pod_ready.go:81] duration metric: took 391.4247ms waiting for pod "kube-proxy-bccdh" in "kube-system" namespace to be "Ready" ...
	I0629 19:34:00.775716    6596 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:34:00.819055    6596 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0629 19:34:00.819055    6596 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0629 19:34:00.819055    6596 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0629 19:34:00.819055    6596 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0629 19:34:00.819055    6596 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0629 19:34:00.819055    6596 command_runner.go:130] > pod/storage-provisioner configured
	I0629 19:34:00.964497    6596 request.go:533] Waited for 188.7164ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220629191914-2408
	I0629 19:34:00.964497    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220629191914-2408
	I0629 19:34:00.964497    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:00.964604    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:00.964604    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:00.974561    6596 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0629 19:34:00.974604    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:00.974604    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:00 GMT
	I0629 19:34:00.974604    6596 round_trippers.go:580]     Audit-Id: ed993975-0657-4a09-b0af-a666cc59c402
	I0629 19:34:00.974660    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:00.974660    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:00.974660    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:00.974660    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:00.974936    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220629191914-2408","namespace":"kube-system","uid":"480afc74-9ecd-4957-a8c1-00d3589ebe52","resourceVersion":"1202","creationTimestamp":"2022-06-29T19:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46818e0bdbd624033ed546f4243f4257","kubernetes.io/config.mirror":"46818e0bdbd624033ed546f4243f4257","kubernetes.io/config.seen":"2022-06-29T19:20:50.548921500Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes
.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io [truncated 4972 chars]
	I0629 19:34:01.169314    6596 request.go:533] Waited for 194.025ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:34:01.169466    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:34:01.169659    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:01.169659    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:01.169659    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:01.184289    6596 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0629 19:34:01.184349    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:01.184349    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:01.184349    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:01 GMT
	I0629 19:34:01.184454    6596 round_trippers.go:580]     Audit-Id: 96a53adf-5fc7-4bd1-a935-0c67d8cda63d
	I0629 19:34:01.184454    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:01.184454    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:01.184569    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:01.185020    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:34:01.185822    6596 pod_ready.go:92] pod "kube-scheduler-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:34:01.185920    6596 pod_ready.go:81] duration metric: took 410.2016ms waiting for pod "kube-scheduler-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:34:01.185965    6596 pod_ready.go:38] duration metric: took 1.9753124s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 19:34:01.185998    6596 api_server.go:51] waiting for apiserver process to appear ...
	I0629 19:34:01.197317    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:34:01.237291    6596 command_runner.go:130] > 1841
	I0629 19:34:01.237291    6596 api_server.go:71] duration metric: took 3.3864192s to wait for apiserver process to appear ...
	I0629 19:34:01.237291    6596 api_server.go:87] waiting for apiserver healthz status ...
	I0629 19:34:01.237291    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:34:01.261467    6596 api_server.go:266] https://127.0.0.1:54819/healthz returned 200:
	ok
	I0629 19:34:01.261467    6596 round_trippers.go:463] GET https://127.0.0.1:54819/version
	I0629 19:34:01.261467    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:01.261467    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:01.261467    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:01.266427    6596 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0629 19:34:01.266472    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:01.266472    6596 round_trippers.go:580]     Content-Length: 263
	I0629 19:34:01.266531    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:01 GMT
	I0629 19:34:01.266531    6596 round_trippers.go:580]     Audit-Id: 369fa7a3-5f35-40f5-9054-1f498aeab8cc
	I0629 19:34:01.266531    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:01.266581    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:01.266581    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:01.266581    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:01.266581    6596 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "24",
	  "gitVersion": "v1.24.2",
	  "gitCommit": "f66044f4361b9f1f96f0053dd46cb7dce5e990a8",
	  "gitTreeState": "clean",
	  "buildDate": "2022-06-15T14:15:38Z",
	  "goVersion": "go1.18.3",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0629 19:34:01.266693    6596 api_server.go:140] control plane version: v1.24.2
	I0629 19:34:01.266693    6596 api_server.go:130] duration metric: took 29.4023ms to wait for apiserver health ...
	I0629 19:34:01.266693    6596 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 19:34:01.373565    6596 request.go:533] Waited for 106.6408ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods
	I0629 19:34:01.373679    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods
	I0629 19:34:01.373679    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:01.373679    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:01.373814    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:01.387508    6596 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0629 19:34:01.387642    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:01.387642    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:01.387727    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:01.387727    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:01 GMT
	I0629 19:34:01.387765    6596 round_trippers.go:580]     Audit-Id: da606766-888a-4077-a190-5934142a9ec9
	I0629 19:34:01.387765    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:01.387796    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:01.391410    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1221"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-6vjv2","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"957527e4-431b-450f-b20f-ead3b2989f97","resourceVersion":"1189","creationTimestamp":"2022-06-29T19:21:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"a116c2a1-3b73-4187-b568-91f9d2aea979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a116c2a1-3b73-4187-b568-91f9d2aea979\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{
},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{". [truncated 84556 chars]
	I0629 19:34:01.394965    6596 system_pods.go:59] 12 kube-system pods found
	I0629 19:34:01.394965    6596 system_pods.go:61] "coredns-6d4b75cb6d-6vjv2" [957527e4-431b-450f-b20f-ead3b2989f97] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "etcd-multinode-20220629191914-2408" [afa29b2e-ffc8-4567-bc07-a20bcc1715c9] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kindnet-b7v2g" [9febc0b9-2af4-478d-acca-bb892672edc1] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kindnet-q54ld" [db15743e-e6f4-41c8-b655-898eb39adcc6] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kindnet-wbwzc" [dbc2ed3b-1dbe-446b-b485-85f5ff911200] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kube-apiserver-multinode-20220629191914-2408" [304971a1-1934-418a-997d-b648ac8c4540] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kube-controller-manager-multinode-20220629191914-2408" [72c39e43-772d-46ed-9bea-9be30695e2cf] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kube-proxy-2mz9l" [0e6449b8-a82c-4e7f-a4a8-a595b07382f3] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kube-proxy-5djlc" [734589bd-4941-4bad-bf82-8782fba95fb0] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kube-proxy-bccdh" [a949d16f-893b-4f7a-969c-45249a4800e7] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kube-scheduler-multinode-20220629191914-2408" [480afc74-9ecd-4957-a8c1-00d3589ebe52] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "storage-provisioner" [ad5ec42d-16a3-429c-a3d7-c08eeb03dcae] Running
	I0629 19:34:01.394965    6596 system_pods.go:74] duration metric: took 128.271ms to wait for pod list to return data ...
	I0629 19:34:01.394965    6596 default_sa.go:34] waiting for default service account to be created ...
	I0629 19:34:01.543296    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1270542s)
	I0629 19:34:01.543296    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:34:01.563440    6596 request.go:533] Waited for 168.428ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/default/serviceaccounts
	I0629 19:34:01.563483    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/default/serviceaccounts
	I0629 19:34:01.563483    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:01.563570    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:01.563570    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:01.573761    6596 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0629 19:34:01.573761    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:01.573761    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:01.573761    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:01.573761    6596 round_trippers.go:580]     Content-Length: 262
	I0629 19:34:01.573761    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:01 GMT
	I0629 19:34:01.573761    6596 round_trippers.go:580]     Audit-Id: dd93e89b-6f15-4a95-b04c-ff88e064731c
	I0629 19:34:01.573761    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:01.573761    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:01.573761    6596 request.go:1073] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1221"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4ae379a5-0caa-4b0a-a2f8-eac6048156ef","resourceVersion":"318","creationTimestamp":"2022-06-29T19:21:20Z"}}]}
	I0629 19:34:01.574308    6596 default_sa.go:45] found service account: "default"
	I0629 19:34:01.574308    6596 default_sa.go:55] duration metric: took 179.3416ms for default service account to be created ...
	I0629 19:34:01.574308    6596 system_pods.go:116] waiting for k8s-apps to be running ...
	I0629 19:34:01.712705    6596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 19:34:01.776211    6596 request.go:533] Waited for 201.7146ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods
	I0629 19:34:01.776431    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods
	I0629 19:34:01.776483    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:01.776516    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:01.776516    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:01.791318    6596 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0629 19:34:01.791419    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:01.791458    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:01.791744    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:01 GMT
	I0629 19:34:01.791744    6596 round_trippers.go:580]     Audit-Id: d4a0a919-5b90-4958-a170-ceecb655f2a0
	I0629 19:34:01.791807    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:01.794312    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:01.794312    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:01.798546    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1221"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-6vjv2","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"957527e4-431b-450f-b20f-ead3b2989f97","resourceVersion":"1189","creationTimestamp":"2022-06-29T19:21:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"a116c2a1-3b73-4187-b568-91f9d2aea979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a116c2a1-3b73-4187-b568-91f9d2aea979\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{
},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{". [truncated 84556 chars]
	I0629 19:34:01.805874    6596 system_pods.go:86] 12 kube-system pods found
	I0629 19:34:01.805874    6596 system_pods.go:89] "coredns-6d4b75cb6d-6vjv2" [957527e4-431b-450f-b20f-ead3b2989f97] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "etcd-multinode-20220629191914-2408" [afa29b2e-ffc8-4567-bc07-a20bcc1715c9] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kindnet-b7v2g" [9febc0b9-2af4-478d-acca-bb892672edc1] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kindnet-q54ld" [db15743e-e6f4-41c8-b655-898eb39adcc6] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kindnet-wbwzc" [dbc2ed3b-1dbe-446b-b485-85f5ff911200] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kube-apiserver-multinode-20220629191914-2408" [304971a1-1934-418a-997d-b648ac8c4540] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kube-controller-manager-multinode-20220629191914-2408" [72c39e43-772d-46ed-9bea-9be30695e2cf] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kube-proxy-2mz9l" [0e6449b8-a82c-4e7f-a4a8-a595b07382f3] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kube-proxy-5djlc" [734589bd-4941-4bad-bf82-8782fba95fb0] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kube-proxy-bccdh" [a949d16f-893b-4f7a-969c-45249a4800e7] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kube-scheduler-multinode-20220629191914-2408" [480afc74-9ecd-4957-a8c1-00d3589ebe52] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "storage-provisioner" [ad5ec42d-16a3-429c-a3d7-c08eeb03dcae] Running
	I0629 19:34:01.805874    6596 system_pods.go:126] duration metric: took 231.564ms to wait for k8s-apps to be running ...
	I0629 19:34:01.805874    6596 system_svc.go:44] waiting for kubelet service to be running ....
	I0629 19:34:01.815583    6596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 19:34:02.046789    6596 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0629 19:34:02.046789    6596 system_svc.go:56] duration metric: took 240.9141ms WaitForService to wait for kubelet.
	I0629 19:34:02.046789    6596 kubeadm.go:572] duration metric: took 4.1959122s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0629 19:34:02.046789    6596 node_conditions.go:102] verifying NodePressure condition ...
	I0629 19:34:02.050352    6596 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0629 19:34:02.046789    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes
	I0629 19:34:02.053084    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:02.053084    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:02.053084    6596 addons.go:414] enableAddons completed in 4.2022071s
	I0629 19:34:02.053084    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:02.059492    6596 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0629 19:34:02.060183    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:02.060183    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:02.060228    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:02.060228    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:02 GMT
	I0629 19:34:02.060228    6596 round_trippers.go:580]     Audit-Id: d38df85f-1527-4699-8ae2-addff4e986be
	I0629 19:34:02.060266    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:02.060266    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:02.060409    6596 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1221"},"items":[{"metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-ma
naged-attach-detach":"true"},"managedFields":[{"manager":"kubelet","ope [truncated 16112 chars]
	I0629 19:34:02.061556    6596 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0629 19:34:02.061599    6596 node_conditions.go:123] node cpu capacity is 16
	I0629 19:34:02.061644    6596 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0629 19:34:02.061644    6596 node_conditions.go:123] node cpu capacity is 16
	I0629 19:34:02.061644    6596 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0629 19:34:02.061644    6596 node_conditions.go:123] node cpu capacity is 16
	I0629 19:34:02.061644    6596 node_conditions.go:105] duration metric: took 14.8547ms to run NodePressure ...
	I0629 19:34:02.061697    6596 start.go:213] waiting for startup goroutines ...
	I0629 19:34:02.072702    6596 config.go:178] Loaded profile config "multinode-20220629191914-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 19:34:02.072702    6596 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\config.json ...
	I0629 19:34:02.086372    6596 out.go:177] * Starting worker node multinode-20220629191914-2408-m02 in cluster multinode-20220629191914-2408
	I0629 19:34:02.088608    6596 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 19:34:02.091071    6596 out.go:177] * Pulling base image ...
	I0629 19:34:02.094150    6596 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 19:34:02.094228    6596 cache.go:57] Caching tarball of preloaded images
	I0629 19:34:02.094298    6596 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 19:34:02.094482    6596 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 19:34:02.094899    6596 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 19:34:02.095177    6596 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\config.json ...
	I0629 19:34:03.181150    6596 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 19:34:03.181182    6596 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 19:34:03.181226    6596 cache.go:208] Successfully downloaded all kic artifacts
	I0629 19:34:03.181328    6596 start.go:352] acquiring machines lock for multinode-20220629191914-2408-m02: {Name:mka48302875babb74b783eb09491576883a88fd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 19:34:03.181546    6596 start.go:356] acquired machines lock for "multinode-20220629191914-2408-m02" in 181.5µs
	I0629 19:34:03.181679    6596 start.go:94] Skipping create...Using existing machine configuration
	I0629 19:34:03.181679    6596 fix.go:55] fixHost starting: m02
	I0629 19:34:03.196535    6596 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408-m02 --format={{.State.Status}}
	I0629 19:34:04.314089    6596 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408-m02 --format={{.State.Status}}: (1.1175458s)
	I0629 19:34:04.314089    6596 fix.go:103] recreateIfNeeded on multinode-20220629191914-2408-m02: state=Stopped err=<nil>
	W0629 19:34:04.314089    6596 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 19:34:04.317010    6596 out.go:177] * Restarting existing docker container for "multinode-20220629191914-2408-m02" ...
	I0629 19:34:04.327011    6596 cli_runner.go:164] Run: docker start multinode-20220629191914-2408-m02
	I0629 19:34:06.341392    6596 cli_runner.go:217] Completed: docker start multinode-20220629191914-2408-m02: (2.0143677s)
	I0629 19:34:06.354177    6596 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408-m02 --format={{.State.Status}}
	I0629 19:34:07.497047    6596 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408-m02 --format={{.State.Status}}: (1.1428624s)
	I0629 19:34:07.497047    6596 kic.go:416] container "multinode-20220629191914-2408-m02" state is running.
	I0629 19:34:07.506049    6596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408-m02
	I0629 19:34:08.651700    6596 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408-m02: (1.1455742s)
	I0629 19:34:08.651759    6596 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\config.json ...
	I0629 19:34:08.654075    6596 machine.go:88] provisioning docker machine ...
	I0629 19:34:08.654146    6596 ubuntu.go:169] provisioning hostname "multinode-20220629191914-2408-m02"
	I0629 19:34:08.663282    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:09.799712    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1362719s)
	I0629 19:34:09.803317    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:34:09.804289    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54862 <nil> <nil>}
	I0629 19:34:09.804359    6596 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220629191914-2408-m02 && echo "multinode-20220629191914-2408-m02" | sudo tee /etc/hostname
	I0629 19:34:10.039387    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220629191914-2408-m02
	
	I0629 19:34:10.048577    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:11.212748    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1641625s)
	I0629 19:34:11.215772    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:34:11.216821    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54862 <nil> <nil>}
	I0629 19:34:11.216821    6596 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220629191914-2408-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220629191914-2408-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220629191914-2408-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 19:34:11.421075    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 19:34:11.423593    6596 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I0629 19:34:11.423664    6596 ubuntu.go:177] setting up certificates
	I0629 19:34:11.423664    6596 provision.go:83] configureAuth start
	I0629 19:34:11.433579    6596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408-m02
	I0629 19:34:12.550470    6596 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408-m02: (1.1163584s)
	I0629 19:34:12.550869    6596 provision.go:138] copyHostCerts
	I0629 19:34:12.551049    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem
	I0629 19:34:12.551343    6596 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I0629 19:34:12.551343    6596 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I0629 19:34:12.551793    6596 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0629 19:34:12.553033    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem
	I0629 19:34:12.553033    6596 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I0629 19:34:12.553033    6596 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I0629 19:34:12.553801    6596 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0629 19:34:12.554859    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem
	I0629 19:34:12.555137    6596 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I0629 19:34:12.555241    6596 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I0629 19:34:12.555755    6596 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1679 bytes)
	I0629 19:34:12.556585    6596 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-20220629191914-2408-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220629191914-2408-m02]
	I0629 19:34:13.185644    6596 provision.go:172] copyRemoteCerts
	I0629 19:34:13.194483    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 19:34:13.201293    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:14.308967    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.107667s)
	I0629 19:34:14.309454    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54862 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408-m02\id_rsa Username:docker}
	I0629 19:34:14.458671    6596 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2641794s)
	I0629 19:34:14.458671    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0629 19:34:14.458671    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0629 19:34:14.519760    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0629 19:34:14.520476    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1269 bytes)
	I0629 19:34:14.574226    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0629 19:34:14.574771    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0629 19:34:14.630329    6596 provision.go:86] duration metric: configureAuth took 3.2066444s
	I0629 19:34:14.630403    6596 ubuntu.go:193] setting minikube options for container-runtime
	I0629 19:34:14.630934    6596 config.go:178] Loaded profile config "multinode-20220629191914-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 19:34:14.639397    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:15.756928    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1174239s)
	I0629 19:34:15.761337    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:34:15.761794    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54862 <nil> <nil>}
	I0629 19:34:15.761865    6596 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 19:34:15.916757    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 19:34:15.916809    6596 ubuntu.go:71] root file system type: overlay
	I0629 19:34:15.917203    6596 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 19:34:15.924774    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:17.043168    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1183863s)
	I0629 19:34:17.049229    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:34:17.050008    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54862 <nil> <nil>}
	I0629 19:34:17.050008    6596 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 19:34:17.270796    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 19:34:17.270796    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:18.400082    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1289572s)
	I0629 19:34:18.405095    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:34:18.405095    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54862 <nil> <nil>}
	I0629 19:34:18.405095    6596 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 19:34:18.624674    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 19:34:18.624674    6596 machine.go:91] provisioned docker machine in 9.9705325s
	I0629 19:34:18.624674    6596 start.go:306] post-start starting for "multinode-20220629191914-2408-m02" (driver="docker")
	I0629 19:34:18.624764    6596 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 19:34:18.638361    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 19:34:18.647019    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:19.777413    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1303134s)
	I0629 19:34:19.777413    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54862 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408-m02\id_rsa Username:docker}
	I0629 19:34:19.923512    6596 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2851433s)
	I0629 19:34:19.938449    6596 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 19:34:19.954543    6596 command_runner.go:130] > NAME="Ubuntu"
	I0629 19:34:19.954543    6596 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0629 19:34:19.954543    6596 command_runner.go:130] > ID=ubuntu
	I0629 19:34:19.954543    6596 command_runner.go:130] > ID_LIKE=debian
	I0629 19:34:19.954543    6596 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0629 19:34:19.954543    6596 command_runner.go:130] > VERSION_ID="20.04"
	I0629 19:34:19.954543    6596 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0629 19:34:19.954543    6596 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0629 19:34:19.954543    6596 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0629 19:34:19.954543    6596 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0629 19:34:19.954543    6596 command_runner.go:130] > VERSION_CODENAME=focal
	I0629 19:34:19.954543    6596 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0629 19:34:19.954543    6596 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 19:34:19.954543    6596 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 19:34:19.954543    6596 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 19:34:19.954543    6596 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 19:34:19.954543    6596 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I0629 19:34:19.955506    6596 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I0629 19:34:19.955506    6596 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem -> 24082.pem in /etc/ssl/certs
	I0629 19:34:19.955506    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem -> /etc/ssl/certs/24082.pem
	I0629 19:34:19.965492    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 19:34:19.984498    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem --> /etc/ssl/certs/24082.pem (1708 bytes)
	I0629 19:34:20.040702    6596 start.go:309] post-start completed in 1.4153872s
	I0629 19:34:20.052103    6596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 19:34:20.059098    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:21.197602    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1384961s)
	I0629 19:34:21.197602    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54862 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408-m02\id_rsa Username:docker}
	I0629 19:34:21.291720    6596 command_runner.go:130] > 5%
	I0629 19:34:21.291720    6596 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2396083s)
	I0629 19:34:21.303684    6596 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 19:34:21.322497    6596 command_runner.go:130] > 227G
	I0629 19:34:21.322497    6596 fix.go:57] fixHost completed within 18.140696s
	I0629 19:34:21.322497    6596 start.go:81] releasing machines lock for "multinode-20220629191914-2408-m02", held for 18.1408018s
	I0629 19:34:21.330510    6596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408-m02
	I0629 19:34:22.470442    6596 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408-m02: (1.1399236s)
	I0629 19:34:22.474884    6596 out.go:177] * Found network options:
	I0629 19:34:22.479633    6596 out.go:177]   - NO_PROXY=192.168.58.2
	W0629 19:34:22.481225    6596 proxy.go:118] fail to check proxy env: Error ip not in block
	I0629 19:34:22.483302    6596 out.go:177]   - no_proxy=192.168.58.2
	W0629 19:34:22.483302    6596 proxy.go:118] fail to check proxy env: Error ip not in block
	W0629 19:34:22.483302    6596 proxy.go:118] fail to check proxy env: Error ip not in block
	I0629 19:34:22.488563    6596 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 19:34:22.496108    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:22.497098    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0629 19:34:22.504101    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:23.649791    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1536758s)
	I0629 19:34:23.649791    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54862 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408-m02\id_rsa Username:docker}
	I0629 19:34:23.665556    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1614113s)
	I0629 19:34:23.665968    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54862 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408-m02\id_rsa Username:docker}
	I0629 19:34:23.865694    6596 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0629 19:34:23.865694    6596 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0629 19:34:23.865694    6596 command_runner.go:130] > <H1>302 Moved</H1>
	I0629 19:34:23.865694    6596 command_runner.go:130] > The document has moved
	I0629 19:34:23.865694    6596 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0629 19:34:23.865694    6596 command_runner.go:130] > </BODY></HTML>
	I0629 19:34:23.865694    6596 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.3770489s)
	I0629 19:34:23.865694    6596 ssh_runner.go:235] Completed: sudo mkdir -p /etc/systemd/system/cri-docker.service.d: (1.3685869s)
	I0629 19:34:23.865694    6596 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0629 19:34:23.929543    6596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 19:34:24.133218    6596 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0629 19:34:24.354320    6596 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 19:34:24.411616    6596 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0629 19:34:24.411616    6596 command_runner.go:130] > [Unit]
	I0629 19:34:24.411616    6596 command_runner.go:130] > Description=Docker Application Container Engine
	I0629 19:34:24.411709    6596 command_runner.go:130] > Documentation=https://docs.docker.com
	I0629 19:34:24.411709    6596 command_runner.go:130] > BindsTo=containerd.service
	I0629 19:34:24.411709    6596 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0629 19:34:24.411709    6596 command_runner.go:130] > Wants=network-online.target
	I0629 19:34:24.411795    6596 command_runner.go:130] > Requires=docker.socket
	I0629 19:34:24.411795    6596 command_runner.go:130] > StartLimitBurst=3
	I0629 19:34:24.411795    6596 command_runner.go:130] > StartLimitIntervalSec=60
	I0629 19:34:24.411795    6596 command_runner.go:130] > [Service]
	I0629 19:34:24.411795    6596 command_runner.go:130] > Type=notify
	I0629 19:34:24.411795    6596 command_runner.go:130] > Restart=on-failure
	I0629 19:34:24.411795    6596 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0629 19:34:24.411795    6596 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0629 19:34:24.411795    6596 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0629 19:34:24.411795    6596 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0629 19:34:24.411926    6596 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0629 19:34:24.411926    6596 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0629 19:34:24.411968    6596 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0629 19:34:24.411968    6596 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0629 19:34:24.412028    6596 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0629 19:34:24.412028    6596 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0629 19:34:24.412028    6596 command_runner.go:130] > ExecStart=
	I0629 19:34:24.412028    6596 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0629 19:34:24.412089    6596 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0629 19:34:24.412089    6596 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0629 19:34:24.412089    6596 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0629 19:34:24.412089    6596 command_runner.go:130] > LimitNOFILE=infinity
	I0629 19:34:24.412089    6596 command_runner.go:130] > LimitNPROC=infinity
	I0629 19:34:24.412089    6596 command_runner.go:130] > LimitCORE=infinity
	I0629 19:34:24.412162    6596 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0629 19:34:24.412162    6596 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0629 19:34:24.412162    6596 command_runner.go:130] > TasksMax=infinity
	I0629 19:34:24.412162    6596 command_runner.go:130] > TimeoutStartSec=0
	I0629 19:34:24.412215    6596 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0629 19:34:24.412215    6596 command_runner.go:130] > Delegate=yes
	I0629 19:34:24.412215    6596 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0629 19:34:24.412215    6596 command_runner.go:130] > KillMode=process
	I0629 19:34:24.412215    6596 command_runner.go:130] > [Install]
	I0629 19:34:24.412284    6596 command_runner.go:130] > WantedBy=multi-user.target
	I0629 19:34:24.412284    6596 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 19:34:24.423006    6596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 19:34:24.457010    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 19:34:24.507102    6596 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0629 19:34:24.507882    6596 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0629 19:34:24.521404    6596 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 19:34:24.714553    6596 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 19:34:24.897635    6596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 19:34:25.099633    6596 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 19:34:25.851278    6596 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 19:34:26.025863    6596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 19:34:26.227617    6596 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 19:34:26.259466    6596 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 19:34:26.269154    6596 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 19:34:26.289255    6596 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0629 19:34:26.289326    6596 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0629 19:34:26.289326    6596 command_runner.go:130] > Device: 100083h/1048707d	Inode: 111         Links: 1
	I0629 19:34:26.289397    6596 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0629 19:34:26.289397    6596 command_runner.go:130] > Access: 2022-06-29 19:34:25.162301000 +0000
	I0629 19:34:26.289397    6596 command_runner.go:130] > Modify: 2022-06-29 19:34:24.162301000 +0000
	I0629 19:34:26.289397    6596 command_runner.go:130] > Change: 2022-06-29 19:34:24.162301000 +0000
	I0629 19:34:26.289467    6596 command_runner.go:130] >  Birth: -
	I0629 19:34:26.289467    6596 start.go:468] Will wait 60s for crictl version
	I0629 19:34:26.299211    6596 ssh_runner.go:195] Run: sudo crictl version
	I0629 19:34:26.382283    6596 command_runner.go:130] > Version:  0.1.0
	I0629 19:34:26.382283    6596 command_runner.go:130] > RuntimeName:  docker
	I0629 19:34:26.382283    6596 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0629 19:34:26.383342    6596 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0629 19:34:26.383342    6596 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 19:34:26.394112    6596 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 19:34:26.476468    6596 command_runner.go:130] > 20.10.17
	I0629 19:34:26.485982    6596 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 19:34:26.568820    6596 command_runner.go:130] > 20.10.17
	I0629 19:34:26.574780    6596 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 19:34:26.576619    6596 out.go:177]   - env NO_PROXY=192.168.58.2
	I0629 19:34:26.585436    6596 cli_runner.go:164] Run: docker exec -t multinode-20220629191914-2408-m02 dig +short host.docker.internal
	I0629 19:34:27.913847    6596 cli_runner.go:217] Completed: docker exec -t multinode-20220629191914-2408-m02 dig +short host.docker.internal: (1.3282902s)
	I0629 19:34:27.913927    6596 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 19:34:27.924106    6596 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 19:34:27.939231    6596 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 19:34:27.968203    6596 certs.go:54] Setting up C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408 for IP: 192.168.58.3
	I0629 19:34:27.970738    6596 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key
	I0629 19:34:27.972888    6596 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key
	I0629 19:34:27.972888    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0629 19:34:27.972888    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0629 19:34:27.972888    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0629 19:34:27.973505    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0629 19:34:27.974027    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem (1338 bytes)
	W0629 19:34:27.974782    6596 certs.go:384] ignoring C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408_empty.pem, impossibly tiny 0 bytes
	I0629 19:34:27.974782    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0629 19:34:27.974782    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0629 19:34:27.975488    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0629 19:34:27.975488    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0629 19:34:27.976200    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem (1708 bytes)
	I0629 19:34:27.976200    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem -> /usr/share/ca-certificates/24082.pem
	I0629 19:34:27.976838    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:34:27.976838    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem -> /usr/share/ca-certificates/2408.pem
	I0629 19:34:27.977484    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 19:34:28.036582    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 19:34:28.091446    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 19:34:28.154840    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 19:34:28.220972    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem --> /usr/share/ca-certificates/24082.pem (1708 bytes)
	I0629 19:34:28.275588    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 19:34:28.327354    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem --> /usr/share/ca-certificates/2408.pem (1338 bytes)
	I0629 19:34:28.393235    6596 ssh_runner.go:195] Run: openssl version
	I0629 19:34:28.411390    6596 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0629 19:34:28.420381    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 19:34:28.458908    6596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:34:28.470916    6596 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 29 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:34:28.470916    6596 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:34:28.478902    6596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:34:28.498254    6596 command_runner.go:130] > b5213941
	I0629 19:34:28.507188    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 19:34:28.542725    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2408.pem && ln -fs /usr/share/ca-certificates/2408.pem /etc/ssl/certs/2408.pem"
	I0629 19:34:28.580292    6596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2408.pem
	I0629 19:34:28.601488    6596 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 29 18:12 /usr/share/ca-certificates/2408.pem
	I0629 19:34:28.601488    6596 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 18:12 /usr/share/ca-certificates/2408.pem
	I0629 19:34:28.612274    6596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2408.pem
	I0629 19:34:28.635477    6596 command_runner.go:130] > 51391683
	I0629 19:34:28.645177    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2408.pem /etc/ssl/certs/51391683.0"
	I0629 19:34:28.682332    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24082.pem && ln -fs /usr/share/ca-certificates/24082.pem /etc/ssl/certs/24082.pem"
	I0629 19:34:28.719445    6596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24082.pem
	I0629 19:34:28.736430    6596 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 29 18:12 /usr/share/ca-certificates/24082.pem
	I0629 19:34:28.736430    6596 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 18:12 /usr/share/ca-certificates/24082.pem
	I0629 19:34:28.750590    6596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24082.pem
	I0629 19:34:28.766593    6596 command_runner.go:130] > 3ec20f2e
	I0629 19:34:28.775594    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24082.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 19:34:28.806764    6596 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 19:34:28.975948    6596 command_runner.go:130] > cgroupfs
	I0629 19:34:28.976148    6596 cni.go:95] Creating CNI manager for ""
	I0629 19:34:28.976148    6596 cni.go:156] 3 nodes found, recommending kindnet
	I0629 19:34:28.976148    6596 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 19:34:28.976148    6596 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220629191914-2408 NodeName:multinode-20220629191914-2408-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:
/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 19:34:28.976148    6596 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220629191914-2408-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 19:34:28.976148    6596 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220629191914-2408-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:multinode-20220629191914-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 19:34:28.986833    6596 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 19:34:29.011169    6596 command_runner.go:130] > kubeadm
	I0629 19:34:29.011169    6596 command_runner.go:130] > kubectl
	I0629 19:34:29.011169    6596 command_runner.go:130] > kubelet
	I0629 19:34:29.013777    6596 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 19:34:29.025907    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0629 19:34:29.050533    6596 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (495 bytes)
	I0629 19:34:29.092747    6596 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 19:34:29.146994    6596 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0629 19:34:29.160609    6596 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 19:34:29.190999    6596 host.go:66] Checking if "multinode-20220629191914-2408" exists ...
	I0629 19:34:29.191968    6596 config.go:178] Loaded profile config "multinode-20220629191914-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 19:34:29.192015    6596 start.go:282] JoinCluster: &{Name:multinode-20220629191914-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:multinode-20220629191914-2408 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:
false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 19:34:29.192099    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0629 19:34:29.199772    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:34:30.323895    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1241161s)
	I0629 19:34:30.323895    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:34:30.609186    6596 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f 
	I0629 19:34:30.609186    6596 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm token create --print-join-command --ttl=0": (1.4170769s)
	I0629 19:34:30.609186    6596 start.go:295] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:34:30.609186    6596 host.go:66] Checking if "multinode-20220629191914-2408" exists ...
	I0629 19:34:30.621393    6596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl drain multinode-20220629191914-2408-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0629 19:34:30.627382    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:34:31.800341    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1728357s)
	I0629 19:34:31.800710    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:34:31.953383    6596 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0629 19:34:32.200788    6596 command_runner.go:130] ! WARNING: ignoring DaemonSet-managed Pods: kube-system/kindnet-q54ld, kube-system/kube-proxy-2mz9l
	I0629 19:34:35.238436    6596 command_runner.go:130] > node/multinode-20220629191914-2408-m02 cordoned
	I0629 19:34:35.238436    6596 command_runner.go:130] > pod "busybox-d46db594c-rbqbj" has DeletionTimestamp older than 1 seconds, skipping
	I0629 19:34:35.238436    6596 command_runner.go:130] > node/multinode-20220629191914-2408-m02 drained
	I0629 19:34:35.238436    6596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl drain multinode-20220629191914-2408-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (4.6170122s)
	I0629 19:34:35.238436    6596 node.go:109] successfully drained node "m02"
	I0629 19:34:35.239533    6596 loader.go:372] Config loaded from file:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 19:34:35.240166    6596 kapi.go:59] client config for multinode-20220629191914-2408: &rest.Config{Host:"https://127.0.0.1:54819", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408\\client.key", CAFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\ca.crt", CertData:[]u
int8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2300480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0629 19:34:35.240919    6596 request.go:1073] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0629 19:34:35.240968    6596 round_trippers.go:463] DELETE https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m02
	I0629 19:34:35.240968    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:35.240968    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:35.240968    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:35.240968    6596 round_trippers.go:473]     Content-Type: application/json
	I0629 19:34:35.252620    6596 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0629 19:34:35.252620    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:35.252620    6596 round_trippers.go:580]     Audit-Id: 28c5810d-2bf9-42ed-9e57-cafbcadc30f0
	I0629 19:34:35.252620    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:35.252620    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:35.252620    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:35.252620    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:35.252620    6596 round_trippers.go:580]     Content-Length: 184
	I0629 19:34:35.252620    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:35 GMT
	I0629 19:34:35.253256    6596 request.go:1073] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-20220629191914-2408-m02","kind":"nodes","uid":"aaf41655-3991-4e63-82df-36b045e3e43c"}}
	I0629 19:34:35.253461    6596 node.go:125] successfully deleted node "m02"
	I0629 19:34:35.253496    6596 start.go:299] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:34:35.253589    6596 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:34:35.253735    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02"
	I0629 19:34:35.358034    6596 command_runner.go:130] ! W0629 19:34:35.347249    1345 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:34:35.358097    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0629 19:34:35.412194    6596 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0629 19:34:35.642510    6596 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0629 19:34:35.642510    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0629 19:34:36.039874    6596 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0629 19:34:36.040030    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:36.051131    6596 command_runner.go:130] > [preflight] Running pre-flight checks
	I0629 19:34:36.051131    6596 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0629 19:34:36.051131    6596 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0629 19:34:36.051234    6596 start.go:305] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:34:35.347249    1345 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:36.051234    6596 start.go:308] resetting worker node "m02" before attempting to rejoin cluster...
	I0629 19:34:36.051324    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force"
	I0629 19:34:36.142668    6596 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0629 19:34:36.142668    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:36.142668    6596 start.go:310] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:36.142668    6596 retry.go:31] will retry after 9.377141872s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:34:35.347249    1345 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:45.534152    6596 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:34:45.534234    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02"
	I0629 19:34:45.622611    6596 command_runner.go:130] ! W0629 19:34:45.619485    1457 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:34:45.622611    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0629 19:34:45.671472    6596 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0629 19:34:45.848185    6596 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0629 19:34:45.848185    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0629 19:34:45.909019    6596 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0629 19:34:45.909113    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:45.919286    6596 command_runner.go:130] > [preflight] Running pre-flight checks
	I0629 19:34:45.919354    6596 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0629 19:34:45.919354    6596 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0629 19:34:45.919420    6596 start.go:305] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:34:45.619485    1457 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:45.919495    6596 start.go:308] resetting worker node "m02" before attempting to rejoin cluster...
	I0629 19:34:45.919495    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force"
	I0629 19:34:46.006561    6596 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0629 19:34:46.006599    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:46.012340    6596 start.go:310] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:46.013589    6596 retry.go:31] will retry after 13.869562456s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:34:45.619485    1457 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:59.893939    6596 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:34:59.893939    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02"
	I0629 19:35:00.013491    6596 command_runner.go:130] > [preflight] Running pre-flight checks
	I0629 19:35:00.284553    6596 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0629 19:35:00.284553    6596 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0629 19:35:00.329128    6596 command_runner.go:130] ! W0629 19:35:00.010255    1984 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:35:00.329128    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0629 19:35:00.329128    6596 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0629 19:35:00.329128    6596 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0629 19:35:00.329128    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0629 19:35:00.329128    6596 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0629 19:35:00.329128    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0629 19:35:00.329128    6596 start.go:305] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:35:00.010255    1984 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:00.329128    6596 start.go:308] resetting worker node "m02" before attempting to rejoin cluster...
	I0629 19:35:00.330946    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force"
	I0629 19:35:00.410019    6596 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0629 19:35:00.410124    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:00.419303    6596 start.go:310] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:00.419338    6596 retry.go:31] will retry after 26.70351914s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:35:00.010255    1984 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:27.130241    6596 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:35:27.130481    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02"
	I0629 19:35:27.226228    6596 command_runner.go:130] ! W0629 19:35:27.222769    2245 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:35:27.226228    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0629 19:35:27.276734    6596 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0629 19:35:27.446017    6596 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0629 19:35:27.446017    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0629 19:35:27.551760    6596 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0629 19:35:27.552352    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:27.559297    6596 command_runner.go:130] > [preflight] Running pre-flight checks
	I0629 19:35:27.559297    6596 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0629 19:35:27.559297    6596 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0629 19:35:27.559892    6596 start.go:305] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:35:27.222769    2245 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:27.559892    6596 start.go:308] resetting worker node "m02" before attempting to rejoin cluster...
	I0629 19:35:27.560007    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force"
	I0629 19:35:27.638888    6596 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0629 19:35:27.638966    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:27.647460    6596 start.go:310] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:27.647460    6596 retry.go:31] will retry after 19.090249398s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:35:27.222769    2245 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:46.739301    6596 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:35:46.739614    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02"
	I0629 19:35:46.834894    6596 command_runner.go:130] ! W0629 19:35:46.831145    2423 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:35:46.834894    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0629 19:35:46.884700    6596 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0629 19:35:47.044196    6596 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0629 19:35:47.044196    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0629 19:35:47.111129    6596 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0629 19:35:47.111129    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:47.119061    6596 command_runner.go:130] > [preflight] Running pre-flight checks
	I0629 19:35:47.119061    6596 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0629 19:35:47.119158    6596 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0629 19:35:47.119236    6596 start.go:305] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:35:46.831145    2423 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:47.119274    6596 start.go:308] resetting worker node "m02" before attempting to rejoin cluster...
	I0629 19:35:47.119274    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force"
	I0629 19:35:47.196599    6596 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0629 19:35:47.197137    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:47.203083    6596 start.go:310] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:47.203083    6596 retry.go:31] will retry after 33.236287271s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:35:46.831145    2423 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:20.442888    6596 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:36:20.443182    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02"
	I0629 19:36:20.530213    6596 command_runner.go:130] ! W0629 19:36:20.526892    2758 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:36:20.530325    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0629 19:36:20.584431    6596 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0629 19:36:20.753890    6596 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0629 19:36:20.754069    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0629 19:36:20.828783    6596 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0629 19:36:20.828783    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:20.840911    6596 command_runner.go:130] > [preflight] Running pre-flight checks
	I0629 19:36:20.841017    6596 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0629 19:36:20.841045    6596 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0629 19:36:20.841151    6596 start.go:305] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:36:20.526892    2758 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:20.841151    6596 start.go:308] resetting worker node "m02" before attempting to rejoin cluster...
	I0629 19:36:20.841285    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force"
	I0629 19:36:20.942318    6596 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0629 19:36:20.942318    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:20.951094    6596 start.go:310] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:20.951094    6596 retry.go:31] will retry after 35.818171134s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:36:20.526892    2758 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
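The `retry.go:31` line above ("will retry after 35.818171134s") shows minikube's join loop: each failed `kubeadm join` is followed by a best-effort `kubeadm reset` (whose own failure is logged as "continuing anyway"), then a backoff wait before the next attempt. A minimal sketch of that loop shape — all names here are illustrative, not minikube's actual implementation:

```python
import time


def join_with_retry(join, reset, delays, sleep=time.sleep):
    """Retry `join` over a schedule of backoff delays.

    After each failed join, run `reset` as cleanup but ignore its
    failure (cf. "kubeadm reset failed, continuing anyway" above),
    then wait before retrying. Re-raise the last error if every
    attempt in the schedule fails.
    """
    last_err = None
    for delay in delays:
        try:
            return join()
        except RuntimeError as err:
            last_err = err
            try:
                reset()  # best-effort cleanup between attempts
            except RuntimeError:
                pass     # reset failure is non-fatal, as in the log
            sleep(delay)
    # schedule exhausted without a successful join
    raise last_err
```

In the log above the loop never converges, because the join keeps failing on the same precondition (a "Ready" node with that name already exists) that the failed reset was supposed to clear.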
	I0629 19:36:56.780548    6596 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:36:56.780836    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02"
	I0629 19:36:56.880463    6596 command_runner.go:130] ! W0629 19:36:56.878437    3079 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:36:56.880463    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0629 19:36:56.933515    6596 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0629 19:36:57.085863    6596 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0629 19:36:57.085965    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0629 19:36:57.154133    6596 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0629 19:36:57.154133    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:57.161847    6596 command_runner.go:130] > [preflight] Running pre-flight checks
	I0629 19:36:57.161847    6596 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0629 19:36:57.161847    6596 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0629 19:36:57.162480    6596 start.go:305] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:36:56.878437    3079 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:57.162480    6596 start.go:308] resetting worker node "m02" before attempting to rejoin cluster...
	I0629 19:36:57.162480    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force"
	I0629 19:36:57.251252    6596 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0629 19:36:57.251340    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:57.259991    6596 start.go:310] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:57.259991    6596 start.go:284] JoinCluster complete in 2m28.0670235s
	I0629 19:36:57.263518    6596 out.go:177] 
	W0629 19:36:57.266395    6596 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:36:56.878437    3079 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 19:36:57.266395    6596 out.go:239] * 
	W0629 19:36:57.267573    6596 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 19:36:57.269632    6596 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:295: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-20220629191914-2408" : exit status 80
multinode_test.go:298: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220629191914-2408
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220629191914-2408
helpers_test.go:231: (dbg) Done: docker inspect multinode-20220629191914-2408: (1.0992372s)
helpers_test.go:235: (dbg) docker inspect multinode-20220629191914-2408:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b554e1949a1a761a404841f84819c741361a1ad95bc3d11656316abbc644b4e0",
	        "Created": "2022-06-29T19:20:10.8281833Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 132742,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T19:33:03.1161903Z",
	            "FinishedAt": "2022-06-29T19:32:28.8367125Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/b554e1949a1a761a404841f84819c741361a1ad95bc3d11656316abbc644b4e0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b554e1949a1a761a404841f84819c741361a1ad95bc3d11656316abbc644b4e0/hostname",
	        "HostsPath": "/var/lib/docker/containers/b554e1949a1a761a404841f84819c741361a1ad95bc3d11656316abbc644b4e0/hosts",
	        "LogPath": "/var/lib/docker/containers/b554e1949a1a761a404841f84819c741361a1ad95bc3d11656316abbc644b4e0/b554e1949a1a761a404841f84819c741361a1ad95bc3d11656316abbc644b4e0-json.log",
	        "Name": "/multinode-20220629191914-2408",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20220629191914-2408:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-20220629191914-2408",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/69ee8614dfc9ab5776b55ff9841059ef26030b822a1acb3378867d5c1be35718-init/diff:/var/lib/docker/overlay2/18fa2cfa420a1e80c1aefe4442db7e09e685eb6b69d30a3f812abf0fcd5b1ce8/diff:/var/lib/docker/overlay2/aba3e90b0a8f2d7eefad9e62fda91f6713fdc48732352c033f55a5f2fb9d5051/diff:/var/lib/docker/overlay2/6e33975e7a2b5eb470d2cc19f49dfd8506b5158029ca29518653c9de49149fa9/diff:/var/lib/docker/overlay2/0037ae946c15e22839a7ac209758f6fe7b71b326b867a9ce04ec676f5b8c06a6/diff:/var/lib/docker/overlay2/9c42624bebccf9152803eaad763e31ce035bdcdde0f54bfd5c88b9bb436d8327/diff:/var/lib/docker/overlay2/84521428dc63a36c9c8e902e4a72309e30edf7ca74fce9fc847a1f0322dbc53f/diff:/var/lib/docker/overlay2/7d7f88709e16b5aae440b1e298e370c888250af23e45a901effd41cf24361c60/diff:/var/lib/docker/overlay2/ec651b0921231e96280abd101a8af5a63c74f75e5393917c7c51a4779e8c18ee/diff:/var/lib/docker/overlay2/de54eba4af17491eb746d5d519e3e9d2209bb7e77a7e5e97a3fed0e5222cc91b/diff:/var/lib/docker/overlay2/41f2a6
c56ef2a3c6f7de181184e718ecb06cff24d2f3067f95f7609c8428890c/diff:/var/lib/docker/overlay2/dadd972d4b0ae7e16296c1fd2116b2362dcd68c94ca80683b16746f9f9af4c04/diff:/var/lib/docker/overlay2/d07ed1db13541e2b4edaad932df907a36057e8115f039c12379e1f4bd9358fcc/diff:/var/lib/docker/overlay2/2ea8ed9010b183040dd8663549244a49bf69bebdb52d48dcdfab8bb80ae569e9/diff:/var/lib/docker/overlay2/2b3aef18028ba313056c34b21dd2fe925b2a075b71ce79d4a700666a4a1294f3/diff:/var/lib/docker/overlay2/4f2c4fada74eb6f2253a2e6e3e69366c21a0e146314e507111b068a94431e118/diff:/var/lib/docker/overlay2/eebb16c3252fcc56a8f29f6f4cc140749f09d91c7618992ef26ccd17bc7326a8/diff:/var/lib/docker/overlay2/4c9fb9630f6a81f45d6683e4b35bd45b802de81702df3682376cd5eded2c6293/diff:/var/lib/docker/overlay2/7966fc785bbb93b70572461c2b75d02d408e500cbfbe9fb28a85610069e53048/diff:/var/lib/docker/overlay2/7ccd830d8272e56eb8af3cb67fd85111a15d0bd24740b16d9820d03e8b5e613b/diff:/var/lib/docker/overlay2/98991bbd08d46d706f89f20373025f42bb1eca28599ab9d368327d28d37da3e5/diff:/var/lib/d
ocker/overlay2/acd0db325a9cc956c00473cca2cbe9e8938e54e309f7812ce96651505f2c026c/diff:/var/lib/docker/overlay2/a952097255f1545a148e11dc183ed9d457d086b3b19e6cf5c0a84d334a7868fa/diff:/var/lib/docker/overlay2/3626d83dcea23ee4d5fc8d381d865b85ce85b9ec935c11ea07472acad97752b1/diff:/var/lib/docker/overlay2/daee0769e25b6c6df3644bb280d7cdd0552baeddf00f478d726753a2f02990c4/diff:/var/lib/docker/overlay2/5884da0a2a1c8a365253922fefbfd2861b15e6174a57089f167640bf48fab86d/diff:/var/lib/docker/overlay2/aee5fb879dea6a5dd5d33604a38bc85cef9e6fac8b91d40eae0556920c1f013e/diff:/var/lib/docker/overlay2/329fa8dc36977e4ddcb0c9d5de68a736a48555759a67cc6b901c51a7d20bc940/diff:/var/lib/docker/overlay2/a83adaf465d339d1dbbb19a8e721a3af2dde845d387565c2c23aa55a2a9b3050/diff:/var/lib/docker/overlay2/1eb7c3b1832b132fab8951130f1f3e7525eb849dfd649d730922f192509da8c7/diff:/var/lib/docker/overlay2/63900a7721a42c32f20ce20e83aa0648dc9f1f96e2c44c60b949cdf2ed635b89/diff:/var/lib/docker/overlay2/b1f2b06276b16264d5e1dd74a450a8433b30f118bdbda62a0be9806dc63
962c9/diff:/var/lib/docker/overlay2/1d38b89baa7faea58d17550171f82f91e22823fa4687739f8e96012ba2d6b8bc/diff:/var/lib/docker/overlay2/6a845f21cdc42782d41ea29b6b6d28d87b17e628822711a35d5986ee5327afe9/diff:/var/lib/docker/overlay2/e64e9638ae14983a1fadb7196cfb18b26908f27c9c025d014d1b3e014fe592f2/diff:/var/lib/docker/overlay2/0e4706537848c7cd84366bdfaa32b5a3c84c900772b5ad83d1dfd507ddbfe686/diff:/var/lib/docker/overlay2/2bfed1b7470b0df7e7cb5905c1d1671735c6755b541e4f951e007994f0a090d8/diff:/var/lib/docker/overlay2/a337d8b9854e844eac3af70f23116c0353d23cc66cd2075e17b2f5c4daeb3a54/diff:/var/lib/docker/overlay2/ad5d4a9033e102f17440a355bb241c3e635b435132d5d3b83f45c913c2b142b9/diff:/var/lib/docker/overlay2/533adabc16e60531543e7123b3eb7c5db5a5d5c8b3ff5c5a58a357f9ce9b92a4/diff:/var/lib/docker/overlay2/004d9874f6692e521883f825fdecfdbc36c21b5776c23879841b4718d1b9f2ab/diff:/var/lib/docker/overlay2/aada4d8989429d7ecea61233a031b78c69ebd53862529973a4c5a3f581e5b2dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69ee8614dfc9ab5776b55ff9841059ef26030b822a1acb3378867d5c1be35718/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69ee8614dfc9ab5776b55ff9841059ef26030b822a1acb3378867d5c1be35718/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69ee8614dfc9ab5776b55ff9841059ef26030b822a1acb3378867d5c1be35718/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-20220629191914-2408",
	                "Source": "/var/lib/docker/volumes/multinode-20220629191914-2408/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20220629191914-2408",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20220629191914-2408",
	                "name.minikube.sigs.k8s.io": "multinode-20220629191914-2408",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "03daaba0dedfb37568591213e08a07b95ef45c25cb9e5c1f5ead2a82ea1a2697",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54820"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54821"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54822"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54823"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54819"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/03daaba0dedf",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-20220629191914-2408": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b554e1949a1a",
	                        "multinode-20220629191914-2408"
	                    ],
	                    "NetworkID": "cef700f66abab44b14d4568e6b94edb798763298185a365d4a76a66981f14859",
	                    "EndpointID": "416419152eeed3d5125052411ab85a565db8f6f14cd1c39c2ee872248fdb89e6",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
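The `docker inspect` output above is plain JSON, so post-mortem tooling can extract fields such as the published host ports programmatically instead of reading the dump by eye. A small sketch against a trimmed excerpt of the record above:

```python
import json

# Trimmed excerpt of the `docker inspect` record shown above.
INSPECT_JSON = """
[
  {
    "Name": "/multinode-20220629191914-2408",
    "State": {"Status": "running", "Running": true},
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "54820"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "54819"}]
      }
    }
  }
]
"""


def host_ports(raw):
    """Map container port -> published host port for the first record."""
    record = json.loads(raw)[0]
    return {
        port: bindings[0]["HostPort"]
        for port, bindings in record["NetworkSettings"]["Ports"].items()
        if bindings  # unpublished ports have no bindings
    }


ports = host_ports(INSPECT_JSON)
```

This is the same lookup a Go template does on the CLI, along the lines of `docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' multinode-20220629191914-2408` (the apiserver port 8443/tcp is published on 127.0.0.1:54819 in the run above).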
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220629191914-2408 -n multinode-20220629191914-2408
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20220629191914-2408 -n multinode-20220629191914-2408: (7.5621628s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 logs -n 25: (8.6559661s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                                 Args                                                                  | Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-20220629191914-2408                                                                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:28 GMT | 29 Jun 22 19:28 GMT |
	|         | ssh -n                                                                                                                                |          |                   |         |                     |                     |
	|         | multinode-20220629191914-2408-m02                                                                                                     |          |                   |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                     |          |                   |         |                     |                     |
	| cp      | multinode-20220629191914-2408 cp multinode-20220629191914-2408-m02:/home/docker/cp-test.txt                                           | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:28 GMT | 29 Jun 22 19:28 GMT |
	|         | C:\Users\jenkins.minikube8\AppData\Local\Temp\TestMultiNodeserialCopyFile3649564904\001\cp-test_multinode-20220629191914-2408-m02.txt |          |                   |         |                     |                     |
	| ssh     | multinode-20220629191914-2408                                                                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:28 GMT | 29 Jun 22 19:28 GMT |
	|         | ssh -n                                                                                                                                |          |                   |         |                     |                     |
	|         | multinode-20220629191914-2408-m02                                                                                                     |          |                   |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                     |          |                   |         |                     |                     |
	| cp      | multinode-20220629191914-2408 cp multinode-20220629191914-2408-m02:/home/docker/cp-test.txt                                           | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:28 GMT | 29 Jun 22 19:28 GMT |
	|         | multinode-20220629191914-2408:/home/docker/cp-test_multinode-20220629191914-2408-m02_multinode-20220629191914-2408.txt                |          |                   |         |                     |                     |
	| ssh     | multinode-20220629191914-2408                                                                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:28 GMT | 29 Jun 22 19:28 GMT |
	|         | ssh -n                                                                                                                                |          |                   |         |                     |                     |
	|         | multinode-20220629191914-2408-m02                                                                                                     |          |                   |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                     |          |                   |         |                     |                     |
	| ssh     | multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408 sudo cat                                                           | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:28 GMT | 29 Jun 22 19:29 GMT |
	|         | /home/docker/cp-test_multinode-20220629191914-2408-m02_multinode-20220629191914-2408.txt                                              |          |                   |         |                     |                     |
	| cp      | multinode-20220629191914-2408 cp multinode-20220629191914-2408-m02:/home/docker/cp-test.txt                                           | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:29 GMT | 29 Jun 22 19:29 GMT |
	|         | multinode-20220629191914-2408-m03:/home/docker/cp-test_multinode-20220629191914-2408-m02_multinode-20220629191914-2408-m03.txt        |          |                   |         |                     |                     |
	| ssh     | multinode-20220629191914-2408                                                                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:29 GMT | 29 Jun 22 19:29 GMT |
	|         | ssh -n                                                                                                                                |          |                   |         |                     |                     |
	|         | multinode-20220629191914-2408-m02                                                                                                     |          |                   |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                     |          |                   |         |                     |                     |
	| ssh     | multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m03 sudo cat                                                       | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:29 GMT | 29 Jun 22 19:29 GMT |
	|         | /home/docker/cp-test_multinode-20220629191914-2408-m02_multinode-20220629191914-2408-m03.txt                                          |          |                   |         |                     |                     |
	| cp      | multinode-20220629191914-2408 cp testdata\cp-test.txt                                                                                 | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:29 GMT | 29 Jun 22 19:29 GMT |
	|         | multinode-20220629191914-2408-m03:/home/docker/cp-test.txt                                                                            |          |                   |         |                     |                     |
	| ssh     | multinode-20220629191914-2408                                                                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:29 GMT | 29 Jun 22 19:29 GMT |
	|         | ssh -n                                                                                                                                |          |                   |         |                     |                     |
	|         | multinode-20220629191914-2408-m03                                                                                                     |          |                   |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                     |          |                   |         |                     |                     |
	| cp      | multinode-20220629191914-2408 cp multinode-20220629191914-2408-m03:/home/docker/cp-test.txt                                           | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:29 GMT | 29 Jun 22 19:29 GMT |
	|         | C:\Users\jenkins.minikube8\AppData\Local\Temp\TestMultiNodeserialCopyFile3649564904\001\cp-test_multinode-20220629191914-2408-m03.txt |          |                   |         |                     |                     |
	| ssh     | multinode-20220629191914-2408                                                                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:29 GMT | 29 Jun 22 19:29 GMT |
	|         | ssh -n                                                                                                                                |          |                   |         |                     |                     |
	|         | multinode-20220629191914-2408-m03                                                                                                     |          |                   |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                     |          |                   |         |                     |                     |
	| cp      | multinode-20220629191914-2408 cp multinode-20220629191914-2408-m03:/home/docker/cp-test.txt                                           | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:29 GMT | 29 Jun 22 19:30 GMT |
	|         | multinode-20220629191914-2408:/home/docker/cp-test_multinode-20220629191914-2408-m03_multinode-20220629191914-2408.txt                |          |                   |         |                     |                     |
	| ssh     | multinode-20220629191914-2408                                                                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:30 GMT | 29 Jun 22 19:30 GMT |
	|         | ssh -n                                                                                                                                |          |                   |         |                     |                     |
	|         | multinode-20220629191914-2408-m03                                                                                                     |          |                   |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                     |          |                   |         |                     |                     |
	| ssh     | multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408 sudo cat                                                           | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:30 GMT | 29 Jun 22 19:30 GMT |
	|         | /home/docker/cp-test_multinode-20220629191914-2408-m03_multinode-20220629191914-2408.txt                                              |          |                   |         |                     |                     |
	| cp      | multinode-20220629191914-2408 cp multinode-20220629191914-2408-m03:/home/docker/cp-test.txt                                           | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:30 GMT | 29 Jun 22 19:30 GMT |
	|         | multinode-20220629191914-2408-m02:/home/docker/cp-test_multinode-20220629191914-2408-m03_multinode-20220629191914-2408-m02.txt        |          |                   |         |                     |                     |
	| ssh     | multinode-20220629191914-2408                                                                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:30 GMT | 29 Jun 22 19:30 GMT |
	|         | ssh -n                                                                                                                                |          |                   |         |                     |                     |
	|         | multinode-20220629191914-2408-m03                                                                                                     |          |                   |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                     |          |                   |         |                     |                     |
	| ssh     | multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m02 sudo cat                                                       | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:30 GMT | 29 Jun 22 19:30 GMT |
	|         | /home/docker/cp-test_multinode-20220629191914-2408-m03_multinode-20220629191914-2408-m02.txt                                          |          |                   |         |                     |                     |
	| node    | multinode-20220629191914-2408                                                                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:30 GMT | 29 Jun 22 19:30 GMT |
	|         | node stop m03                                                                                                                         |          |                   |         |                     |                     |
	| node    | multinode-20220629191914-2408                                                                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:31 GMT | 29 Jun 22 19:31 GMT |
	|         | node start m03                                                                                                                        |          |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                                     |          |                   |         |                     |                     |
	| node    | list -p                                                                                                                               | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:32 GMT |                     |
	|         | multinode-20220629191914-2408                                                                                                         |          |                   |         |                     |                     |
	| stop    | -p                                                                                                                                    | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:32 GMT | 29 Jun 22 19:32 GMT |
	|         | multinode-20220629191914-2408                                                                                                         |          |                   |         |                     |                     |
	| start   | -p                                                                                                                                    | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:32 GMT |                     |
	|         | multinode-20220629191914-2408                                                                                                         |          |                   |         |                     |                     |
	|         | --wait=true -v=8                                                                                                                      |          |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                                     |          |                   |         |                     |                     |
	| node    | list -p                                                                                                                               | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 19:36 GMT |                     |
	|         | multinode-20220629191914-2408                                                                                                         |          |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 19:32:51
	Running on machine: minikube8
	Binary: Built with gc go1.18.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 19:32:51.363731    6596 out.go:296] Setting OutFile to fd 1008 ...
	I0629 19:32:51.420577    6596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 19:32:51.420577    6596 out.go:309] Setting ErrFile to fd 568...
	I0629 19:32:51.420651    6596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 19:32:51.441230    6596 out.go:303] Setting JSON to false
	I0629 19:32:51.442731    6596 start.go:115] hostinfo: {"hostname":"minikube8","uptime":23733,"bootTime":1656507438,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0629 19:32:51.443741    6596 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 19:32:51.448090    6596 out.go:177] * [multinode-20220629191914-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0629 19:32:51.451318    6596 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 19:32:51.451115    6596 notify.go:193] Checking for updates...
	I0629 19:32:51.455322    6596 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0629 19:32:51.458212    6596 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 19:32:51.460283    6596 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 19:32:51.463282    6596 config.go:178] Loaded profile config "multinode-20220629191914-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 19:32:51.463282    6596 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 19:32:54.624687    6596 docker.go:137] docker version: linux-20.10.16
	I0629 19:32:54.634957    6596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 19:32:56.682184    6596 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0472133s)
	I0629 19:32:56.682184    6596 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:52 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-29 19:32:55.6605854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 19:32:56.687470    6596 out.go:177] * Using the docker driver based on existing profile
	I0629 19:32:56.690727    6596 start.go:284] selected driver: docker
	I0629 19:32:56.690727    6596 start.go:808] validating driver "docker" against &{Name:multinode-20220629191914-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:multinode-20220629191914-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 19:32:56.690727    6596 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 19:32:56.703181    6596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 19:32:58.765755    6596 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0624485s)
	I0629 19:32:58.766026    6596 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:52 OomKillDisable:true NGoroutines:47 SystemTime:2022-06-29 19:32:57.7553919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 19:32:58.873644    6596 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 19:32:58.873644    6596 cni.go:95] Creating CNI manager for ""
	I0629 19:32:58.873644    6596 cni.go:156] 3 nodes found, recommending kindnet
	I0629 19:32:58.873644    6596 start_flags.go:310] config:
	{Name:multinode-20220629191914-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:multinode-20220629191914-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 19:32:58.877509    6596 out.go:177] * Starting control plane node multinode-20220629191914-2408 in cluster multinode-20220629191914-2408
	I0629 19:32:58.883618    6596 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 19:32:58.886166    6596 out.go:177] * Pulling base image ...
	I0629 19:32:58.888791    6596 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 19:32:58.888791    6596 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 19:32:58.889635    6596 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 19:32:58.889635    6596 cache.go:57] Caching tarball of preloaded images
	I0629 19:32:58.889971    6596 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 19:32:58.889971    6596 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 19:32:58.889971    6596 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\config.json ...
	I0629 19:32:59.988163    6596 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 19:32:59.988236    6596 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 19:32:59.988236    6596 cache.go:208] Successfully downloaded all kic artifacts
	I0629 19:32:59.988396    6596 start.go:352] acquiring machines lock for multinode-20220629191914-2408: {Name:mk34f398a922278a637dbc30fba078e459217922 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 19:32:59.988668    6596 start.go:356] acquired machines lock for "multinode-20220629191914-2408" in 162.2µs
	I0629 19:32:59.988822    6596 start.go:94] Skipping create...Using existing machine configuration
	I0629 19:32:59.988890    6596 fix.go:55] fixHost starting: 
	I0629 19:33:00.002328    6596 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}
	I0629 19:33:01.101332    6596 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}: (1.0989966s)
	I0629 19:33:01.101332    6596 fix.go:103] recreateIfNeeded on multinode-20220629191914-2408: state=Stopped err=<nil>
	W0629 19:33:01.101332    6596 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 19:33:01.113482    6596 out.go:177] * Restarting existing docker container for "multinode-20220629191914-2408" ...
	I0629 19:33:01.122532    6596 cli_runner.go:164] Run: docker start multinode-20220629191914-2408
	I0629 19:33:03.191406    6596 cli_runner.go:217] Completed: docker start multinode-20220629191914-2408: (2.0687984s)
	I0629 19:33:03.199634    6596 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}
	I0629 19:33:04.353236    6596 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}: (1.1533981s)
	I0629 19:33:04.353307    6596 kic.go:416] container "multinode-20220629191914-2408" state is running.
	I0629 19:33:04.363125    6596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408
	I0629 19:33:05.592231    6596 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408: (1.2289949s)
	I0629 19:33:05.592417    6596 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\config.json ...
	I0629 19:33:05.595170    6596 machine.go:88] provisioning docker machine ...
	I0629 19:33:05.595170    6596 ubuntu.go:169] provisioning hostname "multinode-20220629191914-2408"
	I0629 19:33:05.605073    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:06.783003    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1779222s)
	I0629 19:33:06.786870    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:33:06.787712    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54820 <nil> <nil>}
	I0629 19:33:06.787712    6596 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220629191914-2408 && echo "multinode-20220629191914-2408" | sudo tee /etc/hostname
	I0629 19:33:07.009622    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220629191914-2408
	
	I0629 19:33:07.021166    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:08.130977    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1096318s)
	I0629 19:33:08.142928    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:33:08.143386    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54820 <nil> <nil>}
	I0629 19:33:08.143386    6596 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220629191914-2408' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220629191914-2408/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220629191914-2408' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 19:33:08.285161    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 19:33:08.285161    6596 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I0629 19:33:08.285161    6596 ubuntu.go:177] setting up certificates
	I0629 19:33:08.285161    6596 provision.go:83] configureAuth start
	I0629 19:33:08.293286    6596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408
	I0629 19:33:09.404816    6596 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408: (1.1112964s)
	I0629 19:33:09.404898    6596 provision.go:138] copyHostCerts
	I0629 19:33:09.405079    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem
	I0629 19:33:09.405389    6596 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I0629 19:33:09.405420    6596 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I0629 19:33:09.405893    6596 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1679 bytes)
	I0629 19:33:09.406762    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem
	I0629 19:33:09.406762    6596 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I0629 19:33:09.406762    6596 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I0629 19:33:09.407455    6596 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0629 19:33:09.408234    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem
	I0629 19:33:09.408511    6596 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I0629 19:33:09.408552    6596 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I0629 19:33:09.408851    6596 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0629 19:33:09.409423    6596 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-20220629191914-2408 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220629191914-2408]
	I0629 19:33:09.921504    6596 provision.go:172] copyRemoteCerts
	I0629 19:33:09.930872    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 19:33:09.937589    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:11.085747    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1480323s)
	I0629 19:33:11.086873    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:33:11.241344    6596 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3104212s)
	I0629 19:33:11.241438    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0629 19:33:11.241761    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0629 19:33:11.298284    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0629 19:33:11.299659    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1261 bytes)
	I0629 19:33:11.348835    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0629 19:33:11.349294    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 19:33:11.409618    6596 provision.go:86] duration metric: configureAuth took 3.124436s
	I0629 19:33:11.409618    6596 ubuntu.go:193] setting minikube options for container-runtime
	I0629 19:33:11.410452    6596 config.go:178] Loaded profile config "multinode-20220629191914-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 19:33:11.418097    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:12.522305    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1039232s)
	I0629 19:33:12.525919    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:33:12.526621    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54820 <nil> <nil>}
	I0629 19:33:12.526621    6596 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 19:33:12.730790    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 19:33:12.731891    6596 ubuntu.go:71] root file system type: overlay
	I0629 19:33:12.732387    6596 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 19:33:12.740400    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:13.834290    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.0936946s)
	I0629 19:33:13.838461    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:33:13.838818    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54820 <nil> <nil>}
	I0629 19:33:13.838818    6596 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 19:33:14.082468    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 19:33:14.091908    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:15.185685    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.093769s)
	I0629 19:33:15.188054    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:33:15.188054    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54820 <nil> <nil>}
	I0629 19:33:15.188054    6596 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 19:33:15.415528    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 19:33:15.415528    6596 machine.go:91] provisioned docker machine in 9.8202924s
	I0629 19:33:15.415528    6596 start.go:306] post-start starting for "multinode-20220629191914-2408" (driver="docker")
	I0629 19:33:15.415528    6596 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 19:33:15.426094    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 19:33:15.433565    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:16.555792    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1220767s)
	I0629 19:33:16.556471    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:33:16.703930    6596 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2778275s)
	I0629 19:33:16.713914    6596 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 19:33:16.726829    6596 command_runner.go:130] > NAME="Ubuntu"
	I0629 19:33:16.726829    6596 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0629 19:33:16.726829    6596 command_runner.go:130] > ID=ubuntu
	I0629 19:33:16.726829    6596 command_runner.go:130] > ID_LIKE=debian
	I0629 19:33:16.726829    6596 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0629 19:33:16.726829    6596 command_runner.go:130] > VERSION_ID="20.04"
	I0629 19:33:16.726829    6596 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0629 19:33:16.726829    6596 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0629 19:33:16.726829    6596 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0629 19:33:16.726829    6596 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0629 19:33:16.726829    6596 command_runner.go:130] > VERSION_CODENAME=focal
	I0629 19:33:16.726829    6596 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0629 19:33:16.726829    6596 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 19:33:16.726829    6596 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 19:33:16.726829    6596 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 19:33:16.726829    6596 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 19:33:16.726829    6596 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I0629 19:33:16.727400    6596 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I0629 19:33:16.728093    6596 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem -> 24082.pem in /etc/ssl/certs
	I0629 19:33:16.728136    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem -> /etc/ssl/certs/24082.pem
	I0629 19:33:16.738930    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 19:33:16.770570    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem --> /etc/ssl/certs/24082.pem (1708 bytes)
	I0629 19:33:16.824387    6596 start.go:309] post-start completed in 1.4088489s
	I0629 19:33:16.834038    6596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 19:33:16.840648    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:17.928283    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.0876279s)
	I0629 19:33:17.928283    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:33:18.051017    6596 command_runner.go:130] > 5%!(MISSING)
	I0629 19:33:18.051017    6596 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.216971s)
	I0629 19:33:18.060981    6596 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 19:33:18.075047    6596 command_runner.go:130] > 227G
	I0629 19:33:18.075451    6596 fix.go:57] fixHost completed within 18.086477s
	I0629 19:33:18.075527    6596 start.go:81] releasing machines lock for "multinode-20220629191914-2408", held for 18.0867122s
	I0629 19:33:18.083577    6596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408
	I0629 19:33:19.181084    6596 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408: (1.0974995s)
	I0629 19:33:19.183571    6596 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 19:33:19.191201    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:19.191987    6596 ssh_runner.go:195] Run: systemctl --version
	I0629 19:33:19.199288    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:20.304272    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1049764s)
	I0629 19:33:20.304801    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:33:20.327933    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1367243s)
	I0629 19:33:20.328587    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:33:20.430078    6596 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.17)
	I0629 19:33:20.430624    6596 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0629 19:33:20.430690    6596 ssh_runner.go:235] Completed: systemctl --version: (1.2386294s)
	I0629 19:33:20.441100    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0629 19:33:20.548145    6596 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0629 19:33:20.548145    6596 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0629 19:33:20.548145    6596 command_runner.go:130] > <H1>302 Moved</H1>
	I0629 19:33:20.548145    6596 command_runner.go:130] > The document has moved
	I0629 19:33:20.548145    6596 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0629 19:33:20.548145    6596 command_runner.go:130] > </BODY></HTML>
	I0629 19:33:20.548145    6596 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.364565s)
	I0629 19:33:20.548145    6596 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0629 19:33:20.602145    6596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 19:33:20.764194    6596 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0629 19:33:20.966620    6596 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 19:33:21.024232    6596 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0629 19:33:21.024232    6596 command_runner.go:130] > [Unit]
	I0629 19:33:21.024232    6596 command_runner.go:130] > Description=Docker Application Container Engine
	I0629 19:33:21.024232    6596 command_runner.go:130] > Documentation=https://docs.docker.com
	I0629 19:33:21.024232    6596 command_runner.go:130] > BindsTo=containerd.service
	I0629 19:33:21.024232    6596 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0629 19:33:21.024232    6596 command_runner.go:130] > Wants=network-online.target
	I0629 19:33:21.024232    6596 command_runner.go:130] > Requires=docker.socket
	I0629 19:33:21.024232    6596 command_runner.go:130] > StartLimitBurst=3
	I0629 19:33:21.024232    6596 command_runner.go:130] > StartLimitIntervalSec=60
	I0629 19:33:21.024232    6596 command_runner.go:130] > [Service]
	I0629 19:33:21.024232    6596 command_runner.go:130] > Type=notify
	I0629 19:33:21.024232    6596 command_runner.go:130] > Restart=on-failure
	I0629 19:33:21.024232    6596 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0629 19:33:21.024232    6596 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0629 19:33:21.024232    6596 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0629 19:33:21.024232    6596 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0629 19:33:21.024232    6596 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0629 19:33:21.024232    6596 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0629 19:33:21.024232    6596 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0629 19:33:21.024232    6596 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0629 19:33:21.024232    6596 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0629 19:33:21.024232    6596 command_runner.go:130] > ExecStart=
	I0629 19:33:21.024232    6596 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0629 19:33:21.024232    6596 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0629 19:33:21.024232    6596 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0629 19:33:21.024232    6596 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0629 19:33:21.024232    6596 command_runner.go:130] > LimitNOFILE=infinity
	I0629 19:33:21.024775    6596 command_runner.go:130] > LimitNPROC=infinity
	I0629 19:33:21.024775    6596 command_runner.go:130] > LimitCORE=infinity
	I0629 19:33:21.024775    6596 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0629 19:33:21.024775    6596 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0629 19:33:21.024775    6596 command_runner.go:130] > TasksMax=infinity
	I0629 19:33:21.024840    6596 command_runner.go:130] > TimeoutStartSec=0
	I0629 19:33:21.024840    6596 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0629 19:33:21.024840    6596 command_runner.go:130] > Delegate=yes
	I0629 19:33:21.024840    6596 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0629 19:33:21.024840    6596 command_runner.go:130] > KillMode=process
	I0629 19:33:21.024840    6596 command_runner.go:130] > [Install]
	I0629 19:33:21.024840    6596 command_runner.go:130] > WantedBy=multi-user.target
	I0629 19:33:21.024840    6596 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 19:33:21.034586    6596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 19:33:21.063598    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 19:33:21.106554    6596 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0629 19:33:21.106554    6596 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0629 19:33:21.120154    6596 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 19:33:21.307523    6596 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 19:33:21.487486    6596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 19:33:21.656038    6596 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 19:33:22.563262    6596 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 19:33:22.718542    6596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 19:33:22.890057    6596 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 19:33:22.919227    6596 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 19:33:22.928911    6596 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 19:33:22.942408    6596 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0629 19:33:22.942408    6596 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0629 19:33:22.942408    6596 command_runner.go:130] > Device: d0h/208d	Inode: 104         Links: 1
	I0629 19:33:22.942408    6596 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0629 19:33:22.942408    6596 command_runner.go:130] > Access: 2022-06-29 19:33:20.778642000 +0000
	I0629 19:33:22.942408    6596 command_runner.go:130] > Modify: 2022-06-29 19:33:20.778642000 +0000
	I0629 19:33:22.942408    6596 command_runner.go:130] > Change: 2022-06-29 19:33:20.778642000 +0000
	I0629 19:33:22.942408    6596 command_runner.go:130] >  Birth: -
	I0629 19:33:22.942408    6596 start.go:468] Will wait 60s for crictl version
	I0629 19:33:22.952518    6596 ssh_runner.go:195] Run: sudo crictl version
	I0629 19:33:23.033726    6596 command_runner.go:130] > Version:  0.1.0
	I0629 19:33:23.033726    6596 command_runner.go:130] > RuntimeName:  docker
	I0629 19:33:23.033726    6596 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0629 19:33:23.033726    6596 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0629 19:33:23.034252    6596 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 19:33:23.042543    6596 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 19:33:23.125025    6596 command_runner.go:130] > 20.10.17
	I0629 19:33:23.138882    6596 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 19:33:23.222698    6596 command_runner.go:130] > 20.10.17
	I0629 19:33:23.227737    6596 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 19:33:23.240448    6596 cli_runner.go:164] Run: docker exec -t multinode-20220629191914-2408 dig +short host.docker.internal
	I0629 19:33:24.551451    6596 cli_runner.go:217] Completed: docker exec -t multinode-20220629191914-2408 dig +short host.docker.internal: (1.3109935s)
	I0629 19:33:24.551451    6596 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 19:33:24.561161    6596 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 19:33:24.571675    6596 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
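	The bash one-liner above first drops any stale `host.minikube.internal` line with `grep -v`, then appends a fresh `ip<TAB>name` entry and copies the temp file over `/etc/hosts`. A minimal sketch of that drop-then-append pattern (a hypothetical helper for illustration, not minikube code):

```python
def upsert_host(hosts: str, ip: str, name: str) -> str:
    """Drop any line ending in "\t<name>", then append "<ip>\t<name>" --
    the same grep -v / echo pipeline minikube runs against /etc/hosts."""
    kept = [line for line in hosts.splitlines() if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n192.168.65.99\thost.minikube.internal\n"
print(upsert_host(before, "192.168.65.2", "host.minikube.internal"))
```

Writing to a temp file and `sudo cp`-ing it over `/etc/hosts`, as the log's pipeline does, avoids truncating the file in place if the rewrite is interrupted.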
	I0629 19:33:24.604042    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:25.690173    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.0861243s)
	I0629 19:33:25.690173    6596 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 19:33:25.698242    6596 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 19:33:25.774482    6596 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.24.2
	I0629 19:33:25.774482    6596 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.24.2
	I0629 19:33:25.774482    6596 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.24.2
	I0629 19:33:25.774482    6596 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.24.2
	I0629 19:33:25.774482    6596 command_runner.go:130] > kindest/kindnetd:v20220510-4929dd75
	I0629 19:33:25.774482    6596 command_runner.go:130] > k8s.gcr.io/etcd:3.5.3-0
	I0629 19:33:25.774482    6596 command_runner.go:130] > k8s.gcr.io/pause:3.7
	I0629 19:33:25.775027    6596 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0629 19:33:25.775027    6596 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0629 19:33:25.775027    6596 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 19:33:25.775080    6596 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0629 19:33:25.775109    6596 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	kindest/kindnetd:v20220510-4929dd75
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0629 19:33:25.775220    6596 docker.go:533] Images already preloaded, skipping extraction
	I0629 19:33:25.782846    6596 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 19:33:25.860779    6596 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.24.2
	I0629 19:33:25.860779    6596 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.24.2
	I0629 19:33:25.860862    6596 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.24.2
	I0629 19:33:25.860862    6596 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.24.2
	I0629 19:33:25.860862    6596 command_runner.go:130] > kindest/kindnetd:v20220510-4929dd75
	I0629 19:33:25.860905    6596 command_runner.go:130] > k8s.gcr.io/etcd:3.5.3-0
	I0629 19:33:25.860905    6596 command_runner.go:130] > k8s.gcr.io/pause:3.7
	I0629 19:33:25.860905    6596 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0629 19:33:25.860905    6596 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0629 19:33:25.860905    6596 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 19:33:25.860905    6596 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0629 19:33:25.860905    6596 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	kindest/kindnetd:v20220510-4929dd75
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0629 19:33:25.860905    6596 cache_images.go:84] Images are preloaded, skipping loading
	I0629 19:33:25.868706    6596 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 19:33:26.035859    6596 command_runner.go:130] > cgroupfs
	I0629 19:33:26.042038    6596 cni.go:95] Creating CNI manager for ""
	I0629 19:33:26.042038    6596 cni.go:156] 3 nodes found, recommending kindnet
	I0629 19:33:26.042190    6596 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 19:33:26.042220    6596 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220629191914-2408 NodeName:multinode-20220629191914-2408 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 19:33:26.042425    6596 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220629191914-2408"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
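	The kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A minimal sketch of splitting such a stream and checking each document's `kind`, using a trimmed-down sample of the config shown in the log:

```python
# Split a kubeadm-style multi-document YAML stream on "---" separators
# and collect each document's "kind:" value (plain string parsing; no
# YAML library needed for this sketch).
sample = """\
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def kinds(stream: str) -> list[str]:
    out = []
    for doc in stream.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                out.append(line.split(":", 1)[1].strip())
    return out

print(kinds(sample))
```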
	I0629 19:33:26.042579    6596 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220629191914-2408 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:multinode-20220629191914-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 19:33:26.052994    6596 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 19:33:26.075724    6596 command_runner.go:130] > kubeadm
	I0629 19:33:26.075724    6596 command_runner.go:130] > kubectl
	I0629 19:33:26.075724    6596 command_runner.go:130] > kubelet
	I0629 19:33:26.078971    6596 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 19:33:26.090603    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 19:33:26.117892    6596 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (491 bytes)
	I0629 19:33:26.159337    6596 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 19:33:26.195569    6596 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2052 bytes)
	I0629 19:33:26.243295    6596 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0629 19:33:26.263420    6596 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 19:33:26.294986    6596 certs.go:54] Setting up C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408 for IP: 192.168.58.2
	I0629 19:33:26.295633    6596 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key
	I0629 19:33:26.296006    6596 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key
	I0629 19:33:26.296648    6596 certs.go:298] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\client.key
	I0629 19:33:26.296803    6596 certs.go:298] skipping minikube signed cert generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\apiserver.key.cee25041
	I0629 19:33:26.296803    6596 certs.go:298] skipping aggregator signed cert generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\proxy-client.key
	I0629 19:33:26.296803    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0629 19:33:26.297337    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0629 19:33:26.297393    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0629 19:33:26.297393    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0629 19:33:26.297393    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0629 19:33:26.297393    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0629 19:33:26.298034    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0629 19:33:26.298180    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0629 19:33:26.298823    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem (1338 bytes)
	W0629 19:33:26.298823    6596 certs.go:384] ignoring C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408_empty.pem, impossibly tiny 0 bytes
	I0629 19:33:26.298823    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0629 19:33:26.299352    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0629 19:33:26.299672    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0629 19:33:26.299842    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0629 19:33:26.300340    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem (1708 bytes)
	I0629 19:33:26.300591    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:33:26.300794    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem -> /usr/share/ca-certificates/2408.pem
	I0629 19:33:26.300876    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem -> /usr/share/ca-certificates/24082.pem
	I0629 19:33:26.301532    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 19:33:26.361120    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 19:33:26.412721    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 19:33:26.463053    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0629 19:33:26.517668    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 19:33:26.570170    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 19:33:26.631493    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 19:33:26.687125    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 19:33:26.742487    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 19:33:26.795580    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem --> /usr/share/ca-certificates/2408.pem (1338 bytes)
	I0629 19:33:26.852347    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem --> /usr/share/ca-certificates/24082.pem (1708 bytes)
	I0629 19:33:26.899476    6596 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 19:33:26.955891    6596 ssh_runner.go:195] Run: openssl version
	I0629 19:33:26.971630    6596 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0629 19:33:26.982659    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 19:33:27.018590    6596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:33:27.032874    6596 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 29 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:33:27.032874    6596 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:33:27.043885    6596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:33:27.069292    6596 command_runner.go:130] > b5213941
	I0629 19:33:27.080220    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 19:33:27.120059    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2408.pem && ln -fs /usr/share/ca-certificates/2408.pem /etc/ssl/certs/2408.pem"
	I0629 19:33:27.159419    6596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2408.pem
	I0629 19:33:27.173270    6596 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 29 18:12 /usr/share/ca-certificates/2408.pem
	I0629 19:33:27.173835    6596 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 18:12 /usr/share/ca-certificates/2408.pem
	I0629 19:33:27.183792    6596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2408.pem
	I0629 19:33:27.202477    6596 command_runner.go:130] > 51391683
	I0629 19:33:27.215423    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2408.pem /etc/ssl/certs/51391683.0"
	I0629 19:33:27.252461    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24082.pem && ln -fs /usr/share/ca-certificates/24082.pem /etc/ssl/certs/24082.pem"
	I0629 19:33:27.298227    6596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24082.pem
	I0629 19:33:27.312945    6596 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 29 18:12 /usr/share/ca-certificates/24082.pem
	I0629 19:33:27.313023    6596 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 18:12 /usr/share/ca-certificates/24082.pem
	I0629 19:33:27.323612    6596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24082.pem
	I0629 19:33:27.342921    6596 command_runner.go:130] > 3ec20f2e
	I0629 19:33:27.352573    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24082.pem /etc/ssl/certs/3ec20f2e.0"
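	Each `openssl x509 -hash -noout` call above prints an 8-hex-digit subject-name hash, and the `ln -fs` that follows creates the `<hash>.0` symlink that OpenSSL's hashed CA-directory lookup expects under `/etc/ssl/certs`. A sketch of the link-naming convention, using the hash values from the log:

```python
def ca_link_name(subject_hash: str, seq: int = 0) -> str:
    # OpenSSL hashed-dir convention: <8-hex-digit subject hash>.<sequence>,
    # where the sequence disambiguates hash collisions (almost always 0).
    return f"{subject_hash}.{seq}"

# Hashes printed by the three openssl runs in the log above:
for h in ("b5213941", "51391683", "3ec20f2e"):
    print(f"/etc/ssl/certs/{ca_link_name(h)}")
```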
	I0629 19:33:27.377959    6596 kubeadm.go:395] StartCluster: {Name:multinode-20220629191914-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:multinode-20220629191914-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 19:33:27.388125    6596 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 19:33:27.466070    6596 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 19:33:27.491492    6596 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0629 19:33:27.491492    6596 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0629 19:33:27.491492    6596 command_runner.go:130] > /var/lib/minikube/etcd:
	I0629 19:33:27.491492    6596 command_runner.go:130] > member
	I0629 19:33:27.491492    6596 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 19:33:27.491492    6596 kubeadm.go:626] restartCluster start
	I0629 19:33:27.507017    6596 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 19:33:27.531094    6596 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:27.539212    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:28.645686    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1064667s)
	I0629 19:33:28.646507    6596 kubeconfig.go:116] verify returned: extract IP: "multinode-20220629191914-2408" does not appear in C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 19:33:28.646507    6596 kubeconfig.go:127] "multinode-20220629191914-2408" context is missing from C:\Users\jenkins.minikube8\minikube-integration\kubeconfig - will repair!
	I0629 19:33:28.647397    6596 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 19:33:28.656533    6596 loader.go:372] Config loaded from file:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 19:33:28.657186    6596 kapi.go:59] client config for multinode-20220629191914-2408: &rest.Config{Host:"https://127.0.0.1:54819", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408/client.crt", KeyFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408/client.key", CAFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2300480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0629 19:33:28.658610    6596 cert_rotation.go:137] Starting client certificate rotation controller
	I0629 19:33:28.667032    6596 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 19:33:28.691244    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:28.701602    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:28.731867    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:28.932345    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:28.942495    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:28.972338    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:29.134447    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:29.144207    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:29.170953    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:29.345951    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:29.355395    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:29.381607    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:29.537051    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:29.547209    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:29.580325    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:29.740694    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:29.750407    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:29.781736    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:29.931876    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:29.942159    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:29.977096    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:30.133451    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:30.143861    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:30.174803    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:30.334525    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:30.344893    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:30.372464    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:30.539432    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:30.548599    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:30.576521    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:30.737497    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:30.747771    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:30.775521    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:30.946471    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:30.956071    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:30.984296    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:31.138788    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:31.149395    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:31.185735    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:31.346158    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:31.356459    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:31.387909    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:31.532067    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:31.542887    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:31.569445    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:31.734625    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:31.745081    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:31.777869    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:31.777900    6596 api_server.go:165] Checking apiserver status ...
	I0629 19:33:31.787744    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 19:33:31.817645    6596 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:31.817755    6596 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0629 19:33:31.817755    6596 kubeadm.go:1092] stopping kube-system containers ...
	I0629 19:33:31.825601    6596 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 19:33:31.910287    6596 command_runner.go:130] > 8b3c86d0a1c5
	I0629 19:33:31.910345    6596 command_runner.go:130] > f0ca10825934
	I0629 19:33:31.910345    6596 command_runner.go:130] > 35d237e18d31
	I0629 19:33:31.910345    6596 command_runner.go:130] > fbf6b6b051d1
	I0629 19:33:31.910345    6596 command_runner.go:130] > 4a8fd7455c69
	I0629 19:33:31.910345    6596 command_runner.go:130] > a474d425b0e4
	I0629 19:33:31.910385    6596 command_runner.go:130] > 01dc6840c9af
	I0629 19:33:31.910385    6596 command_runner.go:130] > 677fc6b0f18a
	I0629 19:33:31.910385    6596 command_runner.go:130] > d7c2cbf71616
	I0629 19:33:31.910418    6596 command_runner.go:130] > 1da5e66d6e61
	I0629 19:33:31.910418    6596 command_runner.go:130] > 08172ec4cee1
	I0629 19:33:31.910418    6596 command_runner.go:130] > 72903587275b
	I0629 19:33:31.910418    6596 command_runner.go:130] > aafba86db102
	I0629 19:33:31.910418    6596 command_runner.go:130] > 2b45ac9da375
	I0629 19:33:31.910418    6596 command_runner.go:130] > 2bebeee868d5
	I0629 19:33:31.910418    6596 command_runner.go:130] > 0870274494db
	I0629 19:33:31.910418    6596 docker.go:434] Stopping containers: [8b3c86d0a1c5 f0ca10825934 35d237e18d31 fbf6b6b051d1 4a8fd7455c69 a474d425b0e4 01dc6840c9af 677fc6b0f18a d7c2cbf71616 1da5e66d6e61 08172ec4cee1 72903587275b aafba86db102 2b45ac9da375 2bebeee868d5 0870274494db]
	I0629 19:33:31.918751    6596 ssh_runner.go:195] Run: docker stop 8b3c86d0a1c5 f0ca10825934 35d237e18d31 fbf6b6b051d1 4a8fd7455c69 a474d425b0e4 01dc6840c9af 677fc6b0f18a d7c2cbf71616 1da5e66d6e61 08172ec4cee1 72903587275b aafba86db102 2b45ac9da375 2bebeee868d5 0870274494db
	I0629 19:33:31.994337    6596 command_runner.go:130] > 8b3c86d0a1c5
	I0629 19:33:31.994337    6596 command_runner.go:130] > f0ca10825934
	I0629 19:33:31.994337    6596 command_runner.go:130] > 35d237e18d31
	I0629 19:33:31.994337    6596 command_runner.go:130] > fbf6b6b051d1
	I0629 19:33:31.994337    6596 command_runner.go:130] > 4a8fd7455c69
	I0629 19:33:31.994337    6596 command_runner.go:130] > a474d425b0e4
	I0629 19:33:31.994337    6596 command_runner.go:130] > 01dc6840c9af
	I0629 19:33:31.994337    6596 command_runner.go:130] > 677fc6b0f18a
	I0629 19:33:31.994337    6596 command_runner.go:130] > d7c2cbf71616
	I0629 19:33:31.994337    6596 command_runner.go:130] > 1da5e66d6e61
	I0629 19:33:31.994337    6596 command_runner.go:130] > 08172ec4cee1
	I0629 19:33:31.994337    6596 command_runner.go:130] > 72903587275b
	I0629 19:33:31.994337    6596 command_runner.go:130] > aafba86db102
	I0629 19:33:31.994337    6596 command_runner.go:130] > 2b45ac9da375
	I0629 19:33:31.994337    6596 command_runner.go:130] > 2bebeee868d5
	I0629 19:33:31.994337    6596 command_runner.go:130] > 0870274494db
	I0629 19:33:32.005343    6596 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 19:33:32.050194    6596 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 19:33:32.075502    6596 command_runner.go:130] > -rw------- 1 root root 5643 Jun 29 19:20 /etc/kubernetes/admin.conf
	I0629 19:33:32.075563    6596 command_runner.go:130] > -rw------- 1 root root 5652 Jun 29 19:20 /etc/kubernetes/controller-manager.conf
	I0629 19:33:32.075592    6596 command_runner.go:130] > -rw------- 1 root root 2055 Jun 29 19:21 /etc/kubernetes/kubelet.conf
	I0629 19:33:32.075592    6596 command_runner.go:130] > -rw------- 1 root root 5604 Jun 29 19:20 /etc/kubernetes/scheduler.conf
	I0629 19:33:32.075592    6596 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun 29 19:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun 29 19:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2055 Jun 29 19:21 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun 29 19:20 /etc/kubernetes/scheduler.conf
	
	I0629 19:33:32.084921    6596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0629 19:33:32.111789    6596 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0629 19:33:32.122400    6596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0629 19:33:32.150578    6596 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0629 19:33:32.160490    6596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0629 19:33:32.188499    6596 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:32.198212    6596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0629 19:33:32.234858    6596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0629 19:33:32.266820    6596 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 19:33:32.275777    6596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0629 19:33:32.314469    6596 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 19:33:32.339650    6596 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 19:33:32.339650    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 19:33:32.429738    6596 command_runner.go:130] ! W0629 19:33:32.429615    1197 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0629 19:33:32.464469    6596 command_runner.go:130] > [certs] Using the existing "sa" key
	I0629 19:33:32.464469    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 19:33:32.549056    6596 command_runner.go:130] ! W0629 19:33:32.545226    1209 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:33:33.572150    6596 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0629 19:33:33.572150    6596 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0629 19:33:33.572150    6596 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0629 19:33:33.572150    6596 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0629 19:33:33.572150    6596 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0629 19:33:33.572150    6596 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1076733s)
	I0629 19:33:33.572150    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 19:33:33.658772    6596 command_runner.go:130] ! W0629 19:33:33.654999    1223 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:33:33.979839    6596 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0629 19:33:33.979839    6596 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0629 19:33:33.979839    6596 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0629 19:33:33.979839    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 19:33:34.124992    6596 command_runner.go:130] ! W0629 19:33:34.120743    1274 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:33:34.212332    6596 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0629 19:33:34.212332    6596 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0629 19:33:34.212332    6596 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0629 19:33:34.212332    6596 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0629 19:33:34.212332    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 19:33:34.333764    6596 command_runner.go:130] ! W0629 19:33:34.329223    1297 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:33:34.501017    6596 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0629 19:33:34.501153    6596 api_server.go:51] waiting for apiserver process to appear ...
	I0629 19:33:34.516232    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:35.136573    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:35.634228    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:36.137923    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:36.637226    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:37.138506    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:37.634531    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:38.144986    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:33:38.307571    6596 command_runner.go:130] > 1841
	I0629 19:33:38.307571    6596 api_server.go:71] duration metric: took 3.8065286s to wait for apiserver process to appear ...
	I0629 19:33:38.308128    6596 api_server.go:87] waiting for apiserver healthz status ...
	I0629 19:33:38.308181    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:33:38.315346    6596 api_server.go:256] stopped: https://127.0.0.1:54819/healthz: Get "https://127.0.0.1:54819/healthz": EOF
	I0629 19:33:38.821601    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:33:43.831348    6596 api_server.go:256] stopped: https://127.0.0.1:54819/healthz: Get "https://127.0.0.1:54819/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0629 19:33:44.322567    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:33:44.600737    6596 api_server.go:266] https://127.0.0.1:54819/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 19:33:44.600874    6596 api_server.go:102] status: https://127.0.0.1:54819/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 19:33:44.816593    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:33:44.837496    6596 api_server.go:266] https://127.0.0.1:54819/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 19:33:44.837496    6596 api_server.go:102] status: https://127.0.0.1:54819/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 19:33:45.323644    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:33:45.348473    6596 api_server.go:266] https://127.0.0.1:54819/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 19:33:45.348473    6596 api_server.go:102] status: https://127.0.0.1:54819/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 19:33:45.821683    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:33:45.844403    6596 api_server.go:266] https://127.0.0.1:54819/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 19:33:45.844454    6596 api_server.go:102] status: https://127.0.0.1:54819/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 19:33:46.323922    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:33:46.352270    6596 api_server.go:266] https://127.0.0.1:54819/healthz returned 200:
	ok
	I0629 19:33:46.352945    6596 round_trippers.go:463] GET https://127.0.0.1:54819/version
	I0629 19:33:46.352970    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:46.352999    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:46.353024    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:46.376391    6596 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0629 19:33:46.376481    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:46.376481    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:46.376481    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:46.376481    6596 round_trippers.go:580]     Content-Length: 263
	I0629 19:33:46.376575    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:46 GMT
	I0629 19:33:46.376575    6596 round_trippers.go:580]     Audit-Id: 1dffc048-0e23-48a5-8c86-08a944048159
	I0629 19:33:46.376575    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:46.376575    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:46.376659    6596 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "24",
	  "gitVersion": "v1.24.2",
	  "gitCommit": "f66044f4361b9f1f96f0053dd46cb7dce5e990a8",
	  "gitTreeState": "clean",
	  "buildDate": "2022-06-15T14:15:38Z",
	  "goVersion": "go1.18.3",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0629 19:33:46.376778    6596 api_server.go:140] control plane version: v1.24.2
	I0629 19:33:46.376857    6596 api_server.go:130] duration metric: took 8.068622s to wait for apiserver health ...
	I0629 19:33:46.376857    6596 cni.go:95] Creating CNI manager for ""
	I0629 19:33:46.376857    6596 cni.go:156] 3 nodes found, recommending kindnet
	I0629 19:33:46.381353    6596 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0629 19:33:46.397899    6596 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0629 19:33:46.415351    6596 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0629 19:33:46.415351    6596 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0629 19:33:46.415351    6596 command_runner.go:130] > Device: c7h/199d	Inode: 24833       Links: 1
	I0629 19:33:46.415351    6596 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0629 19:33:46.415351    6596 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0629 19:33:46.415351    6596 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0629 19:33:46.415351    6596 command_runner.go:130] > Change: 2022-06-29 17:58:56.673342000 +0000
	I0629 19:33:46.415351    6596 command_runner.go:130] >  Birth: -
	I0629 19:33:46.415351    6596 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0629 19:33:46.416192    6596 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0629 19:33:46.628595    6596 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0629 19:33:52.200618    6596 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0629 19:33:52.200618    6596 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0629 19:33:52.200618    6596 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0629 19:33:52.200618    6596 command_runner.go:130] > daemonset.apps/kindnet configured
	I0629 19:33:52.200618    6596 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (5.5719856s)
	I0629 19:33:52.201177    6596 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 19:33:52.201392    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods
	I0629 19:33:52.201392    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:52.201392    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:52.201644    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:52.311058    6596 round_trippers.go:574] Response Status: 200 OK in 109 milliseconds
	I0629 19:33:52.311058    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:52.311146    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:52.311146    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:52.311199    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:52.311199    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:52 GMT
	I0629 19:33:52.311199    6596 round_trippers.go:580]     Audit-Id: 87402234-e39b-4a34-bc6a-da76ab5ea9fc
	I0629 19:33:52.311199    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:52.318183    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1172"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-6vjv2","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"957527e4-431b-450f-b20f-ead3b2989f97","resourceVersion":"1129","creationTimestamp":"2022-06-29T19:21:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"a116c2a1-3b73-4187-b568-91f9d2aea979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a116c2a1-3b73-4187-b568-91f9d2aea979\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{
},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{". [truncated 85061 chars]
	I0629 19:33:52.324356    6596 system_pods.go:59] 12 kube-system pods found
	I0629 19:33:52.324356    6596 system_pods.go:61] "coredns-6d4b75cb6d-6vjv2" [957527e4-431b-450f-b20f-ead3b2989f97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0629 19:33:52.324356    6596 system_pods.go:61] "etcd-multinode-20220629191914-2408" [afa29b2e-ffc8-4567-bc07-a20bcc1715c9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0629 19:33:52.324356    6596 system_pods.go:61] "kindnet-b7v2g" [9febc0b9-2af4-478d-acca-bb892672edc1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0629 19:33:52.324356    6596 system_pods.go:61] "kindnet-q54ld" [db15743e-e6f4-41c8-b655-898eb39adcc6] Running
	I0629 19:33:52.324356    6596 system_pods.go:61] "kindnet-wbwzc" [dbc2ed3b-1dbe-446b-b485-85f5ff911200] Running
	I0629 19:33:52.324356    6596 system_pods.go:61] "kube-apiserver-multinode-20220629191914-2408" [304971a1-1934-418a-997d-b648ac8c4540] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 19:33:52.324356    6596 system_pods.go:61] "kube-controller-manager-multinode-20220629191914-2408" [72c39e43-772d-46ed-9bea-9be30695e2cf] Running
	I0629 19:33:52.324356    6596 system_pods.go:61] "kube-proxy-2mz9l" [0e6449b8-a82c-4e7f-a4a8-a595b07382f3] Running
	I0629 19:33:52.324356    6596 system_pods.go:61] "kube-proxy-5djlc" [734589bd-4941-4bad-bf82-8782fba95fb0] Running
	I0629 19:33:52.324356    6596 system_pods.go:61] "kube-proxy-bccdh" [a949d16f-893b-4f7a-969c-45249a4800e7] Running
	I0629 19:33:52.324356    6596 system_pods.go:61] "kube-scheduler-multinode-20220629191914-2408" [480afc74-9ecd-4957-a8c1-00d3589ebe52] Running
	I0629 19:33:52.324356    6596 system_pods.go:61] "storage-provisioner" [ad5ec42d-16a3-429c-a3d7-c08eeb03dcae] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0629 19:33:52.324356    6596 system_pods.go:74] duration metric: took 123.1783ms to wait for pod list to return data ...
	I0629 19:33:52.324929    6596 node_conditions.go:102] verifying NodePressure condition ...
	I0629 19:33:52.324929    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes
	I0629 19:33:52.325053    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:52.325053    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:52.325053    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:52.419663    6596 round_trippers.go:574] Response Status: 200 OK in 94 milliseconds
	I0629 19:33:52.419663    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:52.419663    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:52.419663    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:52.419663    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:52.419663    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:52 GMT
	I0629 19:33:52.419663    6596 round_trippers.go:580]     Audit-Id: 869ec1e5-355a-41ab-865a-f8ecb19742a5
	I0629 19:33:52.419663    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:52.420635    6596 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1173"},"items":[{"metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-ma
naged-attach-detach":"true"},"managedFields":[{"manager":"kubelet","ope [truncated 16112 chars]
	I0629 19:33:52.422044    6596 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0629 19:33:52.422507    6596 node_conditions.go:123] node cpu capacity is 16
	I0629 19:33:52.422507    6596 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0629 19:33:52.422507    6596 node_conditions.go:123] node cpu capacity is 16
	I0629 19:33:52.422507    6596 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0629 19:33:52.422507    6596 node_conditions.go:123] node cpu capacity is 16
	I0629 19:33:52.422507    6596 node_conditions.go:105] duration metric: took 97.5778ms to run NodePressure ...
	I0629 19:33:52.422601    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 19:33:53.023460    6596 command_runner.go:130] ! W0629 19:33:53.018766    2898 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:33:53.631086    6596 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0629 19:33:53.631086    6596 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0629 19:33:53.631086    6596 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.2084601s)
	I0629 19:33:53.631086    6596 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0629 19:33:53.631086    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0629 19:33:53.631086    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:53.631086    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:53.631086    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:53.640734    6596 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0629 19:33:53.640734    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:53.640734    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:53.640734    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:53.640734    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:53 GMT
	I0629 19:33:53.640734    6596 round_trippers.go:580]     Audit-Id: 2d7f9ee9-0a2e-458e-ad0f-f0f74cb2069d
	I0629 19:33:53.640734    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:53.640734    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:53.641556    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1185"},"items":[{"metadata":{"name":"etcd-multinode-20220629191914-2408","namespace":"kube-system","uid":"afa29b2e-ffc8-4567-bc07-a20bcc1715c9","resourceVersion":"1173","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.mirror":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.seen":"2022-06-29T19:21:09.098212600Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time"
:"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata [truncated 30242 chars]
	I0629 19:33:53.643120    6596 retry.go:31] will retry after 276.165072ms: kubelet not initialised
	I0629 19:33:53.919817    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0629 19:33:53.919817    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:53.919817    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:53.919817    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:53.930541    6596 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0629 19:33:53.930541    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:53.930541    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:53.930541    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:53.931073    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:53.931073    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:53 GMT
	I0629 19:33:53.931073    6596 round_trippers.go:580]     Audit-Id: d6e45913-16b2-4d17-a38c-7702c7ae70f1
	I0629 19:33:53.931073    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:53.931230    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1186"},"items":[{"metadata":{"name":"etcd-multinode-20220629191914-2408","namespace":"kube-system","uid":"afa29b2e-ffc8-4567-bc07-a20bcc1715c9","resourceVersion":"1173","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.mirror":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.seen":"2022-06-29T19:21:09.098212600Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time"
:"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata [truncated 30242 chars]
	I0629 19:33:53.933085    6596 retry.go:31] will retry after 540.190908ms: kubelet not initialised
	I0629 19:33:54.484616    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0629 19:33:54.484715    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:54.484715    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:54.484715    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:54.504808    6596 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0629 19:33:54.504905    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:54.504971    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:54.505024    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:54 GMT
	I0629 19:33:54.505024    6596 round_trippers.go:580]     Audit-Id: 355cd416-fee8-47c1-bb9e-4c6f61335a6c
	I0629 19:33:54.505024    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:54.505099    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:54.505130    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:54.505662    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1192"},"items":[{"metadata":{"name":"etcd-multinode-20220629191914-2408","namespace":"kube-system","uid":"afa29b2e-ffc8-4567-bc07-a20bcc1715c9","resourceVersion":"1173","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.mirror":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.seen":"2022-06-29T19:21:09.098212600Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time"
:"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata [truncated 30242 chars]
	I0629 19:33:54.507323    6596 retry.go:31] will retry after 655.06503ms: kubelet not initialised
	I0629 19:33:55.163153    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0629 19:33:55.163153    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.163153    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.163153    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.172498    6596 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0629 19:33:55.172523    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.172523    6596 round_trippers.go:580]     Audit-Id: f768d4a3-904f-4d8f-86d5-c6e0a217240b
	I0629 19:33:55.172583    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.172603    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.172603    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.172603    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.172646    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.173453    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1198"},"items":[{"metadata":{"name":"etcd-multinode-20220629191914-2408","namespace":"kube-system","uid":"afa29b2e-ffc8-4567-bc07-a20bcc1715c9","resourceVersion":"1173","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.mirror":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.seen":"2022-06-29T19:21:09.098212600Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time"
:"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata [truncated 31162 chars]
	I0629 19:33:55.175885    6596 kubeadm.go:777] kubelet initialised
	I0629 19:33:55.175917    6596 kubeadm.go:778] duration metric: took 1.5448204s waiting for restarted kubelet to initialise ...
	I0629 19:33:55.175917    6596 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 19:33:55.176095    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods
	I0629 19:33:55.176095    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.176095    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.176095    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.189033    6596 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0629 19:33:55.189033    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.189033    6596 round_trippers.go:580]     Audit-Id: 1d28989b-ccea-477a-91d8-95a6d568e580
	I0629 19:33:55.189033    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.189033    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.189033    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.189033    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.189033    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.193031    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1198"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-6vjv2","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"957527e4-431b-450f-b20f-ead3b2989f97","resourceVersion":"1189","creationTimestamp":"2022-06-29T19:21:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"a116c2a1-3b73-4187-b568-91f9d2aea979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a116c2a1-3b73-4187-b568-91f9d2aea979\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{
},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{". [truncated 85062 chars]
	I0629 19:33:55.196959    6596 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-6vjv2" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:55.196959    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-6vjv2
	I0629 19:33:55.196959    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.196959    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.196959    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.203566    6596 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0629 19:33:55.203566    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.203566    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.203566    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.203566    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.203566    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.203566    6596 round_trippers.go:580]     Audit-Id: 8c82db42-9e77-43cc-9591-7fefe81de8d7
	I0629 19:33:55.203566    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.203566    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-6vjv2","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"957527e4-431b-450f-b20f-ead3b2989f97","resourceVersion":"1189","creationTimestamp":"2022-06-29T19:21:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"a116c2a1-3b73-4187-b568-91f9d2aea979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a116c2a1-3b73-4187-b568-91f9d2aea979\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f
:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f: [truncated 6191 chars]
	I0629 19:33:55.204321    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:55.204321    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.204321    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.204321    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.214481    6596 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0629 19:33:55.214533    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.214533    6596 round_trippers.go:580]     Audit-Id: e300df70-a2fa-46c4-97e6-f7c88887318a
	I0629 19:33:55.214533    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.214569    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.214569    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.214569    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.214606    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.214702    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:55.215251    6596 pod_ready.go:92] pod "coredns-6d4b75cb6d-6vjv2" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:55.215251    6596 pod_ready.go:81] duration metric: took 18.2917ms waiting for pod "coredns-6d4b75cb6d-6vjv2" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:55.215251    6596 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:55.215417    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/etcd-multinode-20220629191914-2408
	I0629 19:33:55.215504    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.215504    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.215504    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.223522    6596 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0629 19:33:55.223859    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.223859    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.223859    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.223913    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.223913    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.223913    6596 round_trippers.go:580]     Audit-Id: c88276f7-be11-47e1-8625-d9251c2ca59e
	I0629 19:33:55.223913    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.223913    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220629191914-2408","namespace":"kube-system","uid":"afa29b2e-ffc8-4567-bc07-a20bcc1715c9","resourceVersion":"1173","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.mirror":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.seen":"2022-06-29T19:21:09.098212600Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/ [truncated 6048 chars]
	I0629 19:33:55.224567    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:55.224595    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.224595    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.224653    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.232107    6596 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0629 19:33:55.232107    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.232107    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.232107    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.232107    6596 round_trippers.go:580]     Audit-Id: adc7c30b-4ec5-4f6b-9da4-e233a579c604
	I0629 19:33:55.232107    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.232107    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.232107    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.232824    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:55.232824    6596 pod_ready.go:92] pod "etcd-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:55.232824    6596 pod_ready.go:81] duration metric: took 17.4734ms waiting for pod "etcd-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:55.232824    6596 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:55.232824    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220629191914-2408
	I0629 19:33:55.232824    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.232824    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.232824    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.238821    6596 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0629 19:33:55.238821    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.238821    6596 round_trippers.go:580]     Audit-Id: a61f59ad-ddcf-4610-b4cb-6736bb9486e4
	I0629 19:33:55.238821    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.238821    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.238821    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.238821    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.238821    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.238821    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220629191914-2408","namespace":"kube-system","uid":"304971a1-1934-418a-997d-b648ac8c4540","resourceVersion":"1178","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"9c7eac304a910f4e89eb5c9093788bc9","kubernetes.io/config.mirror":"9c7eac304a910f4e89eb5c9093788bc9","kubernetes.io/config.seen":"2022-06-29T19:21:09.098334300Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","
fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{ [truncated 8515 chars]
	I0629 19:33:55.238821    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:55.238821    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.238821    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.238821    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.254030    6596 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0629 19:33:55.254030    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.254558    6596 round_trippers.go:580]     Audit-Id: 86ce1703-cedf-4a84-b2f4-49ed5bd60494
	I0629 19:33:55.254558    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.254558    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.254655    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.254655    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.254655    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.254655    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:55.255911    6596 pod_ready.go:92] pod "kube-apiserver-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:55.255911    6596 pod_ready.go:81] duration metric: took 23.0867ms waiting for pod "kube-apiserver-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:55.255911    6596 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:55.255911    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220629191914-2408
	I0629 19:33:55.255911    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.255911    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.255911    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.268841    6596 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0629 19:33:55.268841    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.268841    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.268841    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.268841    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.268841    6596 round_trippers.go:580]     Audit-Id: 8da6ba79-0654-45f7-87ae-556404147c9f
	I0629 19:33:55.268841    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.268841    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.268841    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220629191914-2408","namespace":"kube-system","uid":"72c39e43-772d-46ed-9bea-9be30695e2cf","resourceVersion":"1196","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.mirror":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.seen":"2022-06-29T19:21:09.098340400Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".
":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{ [truncated 8350 chars]
	I0629 19:33:55.269838    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:55.269838    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.269838    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.269838    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.276849    6596 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0629 19:33:55.276849    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.276849    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.276849    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.276849    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.276849    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.276849    6596 round_trippers.go:580]     Audit-Id: 9a65f286-cf4f-4743-8cfd-5bb4c0fd8153
	I0629 19:33:55.276849    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.277846    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:55.781456    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220629191914-2408
	I0629 19:33:55.781456    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.781456    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.781456    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.791164    6596 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0629 19:33:55.791164    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.791164    6596 round_trippers.go:580]     Audit-Id: f3a8e9c7-b6d1-4436-be44-7bf09c5795c6
	I0629 19:33:55.791164    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.791164    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.791164    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.791164    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.791164    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.791164    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220629191914-2408","namespace":"kube-system","uid":"72c39e43-772d-46ed-9bea-9be30695e2cf","resourceVersion":"1196","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.mirror":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.seen":"2022-06-29T19:21:09.098340400Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".
":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{ [truncated 8350 chars]
	I0629 19:33:55.792549    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:55.792549    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:55.792549    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:55.792549    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:55.803056    6596 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0629 19:33:55.803056    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:55.803056    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:55.803056    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:55.803056    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:55.803056    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:55.803056    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:55 GMT
	I0629 19:33:55.803056    6596 round_trippers.go:580]     Audit-Id: e85dca24-7398-4575-8fb1-640077e65acb
	I0629 19:33:55.803527    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:56.288729    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220629191914-2408
	I0629 19:33:56.288729    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:56.288811    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:56.288811    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:56.298986    6596 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0629 19:33:56.299024    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:56.299024    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:56.299179    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:56 GMT
	I0629 19:33:56.299179    6596 round_trippers.go:580]     Audit-Id: eab87892-4024-4496-9664-4ba4755e61af
	I0629 19:33:56.299234    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:56.299234    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:56.299234    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:56.299474    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220629191914-2408","namespace":"kube-system","uid":"72c39e43-772d-46ed-9bea-9be30695e2cf","resourceVersion":"1208","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.mirror":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.seen":"2022-06-29T19:21:09.098340400Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".
":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{ [truncated 8088 chars]
	I0629 19:33:56.300061    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:56.300116    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:56.300116    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:56.300176    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:56.308918    6596 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0629 19:33:56.308918    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:56.308918    6596 round_trippers.go:580]     Audit-Id: 4d1bd454-af19-4abb-ac74-8ec090a92bae
	I0629 19:33:56.308918    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:56.308918    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:56.309912    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:56.309912    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:56.309912    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:56 GMT
	I0629 19:33:56.309912    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:56.310617    6596 pod_ready.go:92] pod "kube-controller-manager-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:56.310777    6596 pod_ready.go:81] duration metric: took 1.0548598s waiting for pod "kube-controller-manager-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:56.310777    6596 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2mz9l" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:56.310928    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-2mz9l
	I0629 19:33:56.310928    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:56.310928    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:56.311017    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:56.319997    6596 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0629 19:33:56.319997    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:56.319997    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:56.319997    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:56.319997    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:56.319997    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:56 GMT
	I0629 19:33:56.319997    6596 round_trippers.go:580]     Audit-Id: 226fd44e-1cdd-4cab-9b15-ceb3d570f776
	I0629 19:33:56.319997    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:56.319997    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2mz9l","generateName":"kube-proxy-","namespace":"kube-system","uid":"0e6449b8-a82c-4e7f-a4a8-a595b07382f3","resourceVersion":"538","creationTimestamp":"2022-06-29T19:23:16Z","labels":{"controller-revision-hash":"5c599f896","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"72ed7b02-8d7b-4829-88da-284ac3420400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:23:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72ed7b02-8d7b-4829-88da-284ac3420400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5547 chars]
	I0629 19:33:56.369206    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m02
	I0629 19:33:56.369206    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:56.369206    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:56.369206    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:56.376518    6596 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0629 19:33:56.376627    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:56.376627    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:56.376627    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:56.376627    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:56.376627    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:56.376693    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:56 GMT
	I0629 19:33:56.376693    6596 round_trippers.go:580]     Audit-Id: 2443a0bc-cdca-4087-b87d-cf626931d73a
	I0629 19:33:56.376848    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408-m02","uid":"aaf41655-3991-4e63-82df-36b045e3e43c","resourceVersion":"920","creationTimestamp":"2022-06-29T19:23:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:23:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:23:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 4539 chars]
	I0629 19:33:56.376848    6596 pod_ready.go:92] pod "kube-proxy-2mz9l" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:56.376848    6596 pod_ready.go:81] duration metric: took 66.07ms waiting for pod "kube-proxy-2mz9l" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:56.376848    6596 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5djlc" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:56.569121    6596 request.go:533] Waited for 192.0968ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-5djlc
	I0629 19:33:56.569437    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-5djlc
	I0629 19:33:56.569437    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:56.569437    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:56.569437    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:56.577363    6596 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0629 19:33:56.577403    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:56.577434    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:56.577434    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:56.577469    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:56 GMT
	I0629 19:33:56.577469    6596 round_trippers.go:580]     Audit-Id: 6bb7d709-ce60-4e3f-a257-c4c8cd36a835
	I0629 19:33:56.577469    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:56.577469    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:56.577654    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5djlc","generateName":"kube-proxy-","namespace":"kube-system","uid":"734589bd-4941-4bad-bf82-8782fba95fb0","resourceVersion":"1169","creationTimestamp":"2022-06-29T19:21:20Z","labels":{"controller-revision-hash":"5c599f896","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"72ed7b02-8d7b-4829-88da-284ac3420400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72ed7b02-8d7b-4829-88da-284ac3420400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5745 chars]
	I0629 19:33:56.771207    6596 request.go:533] Waited for 192.7675ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:56.771294    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:56.771294    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:56.771294    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:56.771294    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:56.781423    6596 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0629 19:33:56.781480    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:56.781514    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:56.781514    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:56.781544    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:56 GMT
	I0629 19:33:56.781544    6596 round_trippers.go:580]     Audit-Id: 6f3577c3-f86a-4e9a-81b5-8d2c65e49103
	I0629 19:33:56.781544    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:56.781544    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:56.781544    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:56.782120    6596 pod_ready.go:92] pod "kube-proxy-5djlc" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:56.782120    6596 pod_ready.go:81] duration metric: took 405.2696ms waiting for pod "kube-proxy-5djlc" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:56.782120    6596 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bccdh" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:56.965065    6596 request.go:533] Waited for 182.6394ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-bccdh
	I0629 19:33:56.965158    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-bccdh
	I0629 19:33:56.965158    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:56.965158    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:56.965392    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:56.972835    6596 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0629 19:33:56.972835    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:56.972835    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:56.972835    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:56.972835    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:56 GMT
	I0629 19:33:56.972835    6596 round_trippers.go:580]     Audit-Id: 4dd77c45-3d97-4b32-856b-639dce66bdb3
	I0629 19:33:56.972835    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:56.972835    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:56.972835    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bccdh","generateName":"kube-proxy-","namespace":"kube-system","uid":"a949d16f-893b-4f7a-969c-45249a4800e7","resourceVersion":"1100","creationTimestamp":"2022-06-29T19:26:11Z","labels":{"controller-revision-hash":"5c599f896","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"72ed7b02-8d7b-4829-88da-284ac3420400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:26:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72ed7b02-8d7b-4829-88da-284ac3420400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5753 chars]
	I0629 19:33:57.171134    6596 request.go:533] Waited for 197.0902ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m03
	I0629 19:33:57.171917    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m03
	I0629 19:33:57.171917    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:57.171917    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:57.171986    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:57.180086    6596 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0629 19:33:57.180109    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:57.180109    6596 round_trippers.go:580]     Audit-Id: 4c051343-83c1-4192-857e-bd95d011bbbf
	I0629 19:33:57.180109    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:57.180109    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:57.180109    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:57.180109    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:57.180109    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:57 GMT
	I0629 19:33:57.180109    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408-m03","uid":"a730aee4-fd4f-4ea7-9eba-d4268a85cdf0","resourceVersion":"1086","creationTimestamp":"2022-06-29T19:31:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:31:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"202
2-06-29T19:31:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f [truncated 4211 chars]
	I0629 19:33:57.180631    6596 pod_ready.go:92] pod "kube-proxy-bccdh" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:57.180764    6596 pod_ready.go:81] duration metric: took 398.6413ms waiting for pod "kube-proxy-bccdh" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:57.180796    6596 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:57.373880    6596 request.go:533] Waited for 192.854ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220629191914-2408
	I0629 19:33:57.373970    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220629191914-2408
	I0629 19:33:57.373970    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:57.373970    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:57.373970    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:57.381510    6596 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0629 19:33:57.381574    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:57.381602    6596 round_trippers.go:580]     Audit-Id: a8a5fc86-81ac-44bb-b1f9-8cd3adca30c1
	I0629 19:33:57.381602    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:57.381602    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:57.381602    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:57.381602    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:57.381602    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:57 GMT
	I0629 19:33:57.381602    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220629191914-2408","namespace":"kube-system","uid":"480afc74-9ecd-4957-a8c1-00d3589ebe52","resourceVersion":"1202","creationTimestamp":"2022-06-29T19:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46818e0bdbd624033ed546f4243f4257","kubernetes.io/config.mirror":"46818e0bdbd624033ed546f4243f4257","kubernetes.io/config.seen":"2022-06-29T19:20:50.548921500Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes
.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io [truncated 4972 chars]
	I0629 19:33:57.577489    6596 request.go:533] Waited for 194.8206ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:57.577489    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:57.577489    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:57.577489    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:57.577489    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:57.604690    6596 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0629 19:33:57.605167    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:57.605302    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:57 GMT
	I0629 19:33:57.605393    6596 round_trippers.go:580]     Audit-Id: 06f4a5c7-4952-49dc-9c66-a1e27228920c
	I0629 19:33:57.605393    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:57.605393    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:57.605393    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:57.605393    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:57.605393    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:57.606371    6596 pod_ready.go:92] pod "kube-scheduler-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:57.606371    6596 pod_ready.go:81] duration metric: took 425.5385ms waiting for pod "kube-scheduler-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:57.606371    6596 pod_ready.go:38] duration metric: took 2.4304377s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 19:33:57.606492    6596 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 19:33:57.810186    6596 command_runner.go:130] > -16
	I0629 19:33:57.810186    6596 ops.go:34] apiserver oom_adj: -16
	I0629 19:33:57.810186    6596 kubeadm.go:630] restartCluster took 30.3184901s
	I0629 19:33:57.810186    6596 kubeadm.go:397] StartCluster complete in 30.4325536s
	I0629 19:33:57.810744    6596 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 19:33:57.811029    6596 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 19:33:57.812553    6596 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 19:33:57.824438    6596 loader.go:372] Config loaded from file:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 19:33:57.825483    6596 kapi.go:59] client config for multinode-20220629191914-2408: &rest.Config{Host:"https://127.0.0.1:54819", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408\\client.key", CAFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\ca.crt", CertData:[]u
int8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2300480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0629 19:33:57.826692    6596 cert_rotation.go:137] Starting client certificate rotation controller
	I0629 19:33:57.826692    6596 round_trippers.go:463] GET https://127.0.0.1:54819/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0629 19:33:57.826692    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:57.826692    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:57.826692    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:57.849810    6596 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0629 19:33:57.849810    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:57.849810    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:57.849810    6596 round_trippers.go:580]     Content-Length: 292
	I0629 19:33:57.849810    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:57 GMT
	I0629 19:33:57.849810    6596 round_trippers.go:580]     Audit-Id: f8ce193f-964e-49b8-9d1e-103e3e669926
	I0629 19:33:57.849810    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:57.849810    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:57.849810    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:57.849810    6596 request.go:1073] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e3b60944-576d-4023-b66a-3fdcbedd3a25","resourceVersion":"1184","creationTimestamp":"2022-06-29T19:21:08Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0629 19:33:57.849810    6596 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20220629191914-2408" rescaled to 1
	I0629 19:33:57.850849    6596 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 19:33:57.855864    6596 out.go:177] * Verifying Kubernetes components...
	I0629 19:33:57.850849    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 19:33:57.850849    6596 addons.go:412] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0629 19:33:57.850849    6596 config.go:178] Loaded profile config "multinode-20220629191914-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 19:33:57.858842    6596 addons.go:65] Setting storage-provisioner=true in profile "multinode-20220629191914-2408"
	I0629 19:33:57.858842    6596 addons.go:153] Setting addon storage-provisioner=true in "multinode-20220629191914-2408"
	W0629 19:33:57.858842    6596 addons.go:162] addon storage-provisioner should already be in state true
	I0629 19:33:57.858842    6596 addons.go:65] Setting default-storageclass=true in profile "multinode-20220629191914-2408"
	I0629 19:33:57.858842    6596 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20220629191914-2408"
	I0629 19:33:57.858842    6596 host.go:66] Checking if "multinode-20220629191914-2408" exists ...
	I0629 19:33:57.868852    6596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 19:33:57.876799    6596 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}
	I0629 19:33:57.877803    6596 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}
	I0629 19:33:58.039351    6596 command_runner.go:130] > apiVersion: v1
	I0629 19:33:58.039351    6596 command_runner.go:130] > data:
	I0629 19:33:58.039351    6596 command_runner.go:130] >   Corefile: |
	I0629 19:33:58.039351    6596 command_runner.go:130] >     .:53 {
	I0629 19:33:58.039351    6596 command_runner.go:130] >         errors
	I0629 19:33:58.039351    6596 command_runner.go:130] >         health {
	I0629 19:33:58.039351    6596 command_runner.go:130] >            lameduck 5s
	I0629 19:33:58.039351    6596 command_runner.go:130] >         }
	I0629 19:33:58.039351    6596 command_runner.go:130] >         ready
	I0629 19:33:58.039351    6596 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0629 19:33:58.039351    6596 command_runner.go:130] >            pods insecure
	I0629 19:33:58.039351    6596 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0629 19:33:58.039351    6596 command_runner.go:130] >            ttl 30
	I0629 19:33:58.039351    6596 command_runner.go:130] >         }
	I0629 19:33:58.039351    6596 command_runner.go:130] >         prometheus :9153
	I0629 19:33:58.039351    6596 command_runner.go:130] >         hosts {
	I0629 19:33:58.039351    6596 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I0629 19:33:58.039351    6596 command_runner.go:130] >            fallthrough
	I0629 19:33:58.039351    6596 command_runner.go:130] >         }
	I0629 19:33:58.039351    6596 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0629 19:33:58.039351    6596 command_runner.go:130] >            max_concurrent 1000
	I0629 19:33:58.039351    6596 command_runner.go:130] >         }
	I0629 19:33:58.039351    6596 command_runner.go:130] >         cache 30
	I0629 19:33:58.039351    6596 command_runner.go:130] >         loop
	I0629 19:33:58.039351    6596 command_runner.go:130] >         reload
	I0629 19:33:58.039351    6596 command_runner.go:130] >         loadbalance
	I0629 19:33:58.039351    6596 command_runner.go:130] >     }
	I0629 19:33:58.039351    6596 command_runner.go:130] > kind: ConfigMap
	I0629 19:33:58.039351    6596 command_runner.go:130] > metadata:
	I0629 19:33:58.039351    6596 command_runner.go:130] >   creationTimestamp: "2022-06-29T19:21:08Z"
	I0629 19:33:58.039351    6596 command_runner.go:130] >   name: coredns
	I0629 19:33:58.039351    6596 command_runner.go:130] >   namespace: kube-system
	I0629 19:33:58.039351    6596 command_runner.go:130] >   resourceVersion: "383"
	I0629 19:33:58.039351    6596 command_runner.go:130] >   uid: fad4f9c4-c0ea-4ac6-ab7a-2148242c8a5e
	I0629 19:33:58.039351    6596 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0629 19:33:58.052321    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:59.019299    6596 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}: (1.141488s)
	I0629 19:33:59.020299    6596 loader.go:372] Config loaded from file:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 19:33:59.020299    6596 kapi.go:59] client config for multinode-20220629191914-2408: &rest.Config{Host:"https://127.0.0.1:54819", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408\\client.key", CAFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\ca.crt", CertData:[]u
int8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2300480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0629 19:33:59.021307    6596 round_trippers.go:463] GET https://127.0.0.1:54819/apis/storage.k8s.io/v1/storageclasses
	I0629 19:33:59.021307    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.021307    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.021307    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.031297    6596 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}: (1.15449s)
	I0629 19:33:59.034298    6596 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 19:33:59.036911    6596 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 19:33:59.036911    6596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 19:33:59.046478    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:33:59.108569    6596 round_trippers.go:574] Response Status: 200 OK in 87 milliseconds
	I0629 19:33:59.108651    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.108691    6596 round_trippers.go:580]     Audit-Id: 4fd23ea0-bc9c-413a-b3ad-b96733f71102
	I0629 19:33:59.108729    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.108729    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.108729    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.108729    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.108729    6596 round_trippers.go:580]     Content-Length: 1274
	I0629 19:33:59.108729    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.108868    6596 request.go:1073] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"1216"},"items":[{"metadata":{"name":"standard","uid":"c9d8c037-c78f-4b3b-b4b1-ffbf158fdff0","resourceVersion":"396","creationTimestamp":"2022-06-29T19:21:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-06-29T19:21:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubern
etes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is [truncated 250 chars]
	I0629 19:33:59.109973    6596 request.go:1073] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c9d8c037-c78f-4b3b-b4b1-ffbf158fdff0","resourceVersion":"396","creationTimestamp":"2022-06-29T19:21:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-06-29T19:21:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0629 19:33:59.110109    6596 round_trippers.go:463] PUT https://127.0.0.1:54819/apis/storage.k8s.io/v1/storageclasses/standard
	I0629 19:33:59.110139    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.110139    6596 round_trippers.go:473]     Content-Type: application/json
	I0629 19:33:59.110139    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.110139    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.200642    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1483129s)
	I0629 19:33:59.200642    6596 node_ready.go:35] waiting up to 6m0s for node "multinode-20220629191914-2408" to be "Ready" ...
	I0629 19:33:59.201741    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:59.201741    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.201741    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.201741    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.203633    6596 round_trippers.go:574] Response Status: 200 OK in 93 milliseconds
	I0629 19:33:59.204639    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.204639    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.204639    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.204639    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.204639    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.204639    6596 round_trippers.go:580]     Content-Length: 1220
	I0629 19:33:59.204639    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.204639    6596 round_trippers.go:580]     Audit-Id: a25940ec-95f1-4f11-b982-6e723a076e49
	I0629 19:33:59.204639    6596 request.go:1073] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c9d8c037-c78f-4b3b-b4b1-ffbf158fdff0","resourceVersion":"396","creationTimestamp":"2022-06-29T19:21:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-06-29T19:21:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0629 19:33:59.205659    6596 addons.go:153] Setting addon default-storageclass=true in "multinode-20220629191914-2408"
	W0629 19:33:59.205659    6596 addons.go:162] addon default-storageclass should already be in state true
	I0629 19:33:59.206642    6596 host.go:66] Checking if "multinode-20220629191914-2408" exists ...
	I0629 19:33:59.209634    6596 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0629 19:33:59.209634    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.209634    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.209634    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.209634    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.209634    6596 round_trippers.go:580]     Audit-Id: 1cb711a1-c7d9-48c4-838c-1220a40c9ec9
	I0629 19:33:59.209634    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.209634    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.209634    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:59.210639    6596 node_ready.go:49] node "multinode-20220629191914-2408" has status "Ready":"True"
	I0629 19:33:59.210639    6596 node_ready.go:38] duration metric: took 9.9971ms waiting for node "multinode-20220629191914-2408" to be "Ready" ...
	I0629 19:33:59.210639    6596 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 19:33:59.210639    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods
	I0629 19:33:59.210639    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.210639    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.210639    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.231613    6596 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}
	I0629 19:33:59.312061    6596 round_trippers.go:574] Response Status: 200 OK in 101 milliseconds
	I0629 19:33:59.312061    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.312203    6596 round_trippers.go:580]     Audit-Id: 731bf19e-12f4-4803-85e9-533a284946d0
	I0629 19:33:59.312203    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.312325    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.312395    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.312395    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.312484    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.320057    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1218"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-6vjv2","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"957527e4-431b-450f-b20f-ead3b2989f97","resourceVersion":"1189","creationTimestamp":"2022-06-29T19:21:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"a116c2a1-3b73-4187-b568-91f9d2aea979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a116c2a1-3b73-4187-b568-91f9d2aea979\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{
},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{". [truncated 84556 chars]
	I0629 19:33:59.325916    6596 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-6vjv2" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.325916    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-6vjv2
	I0629 19:33:59.325916    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.325916    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.325916    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.403223    6596 round_trippers.go:574] Response Status: 200 OK in 77 milliseconds
	I0629 19:33:59.403344    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.403417    6596 round_trippers.go:580]     Audit-Id: 2b392e97-fc47-482e-981d-232f775c95e1
	I0629 19:33:59.403417    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.403417    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.403417    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.403555    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.403555    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.403812    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-6vjv2","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"957527e4-431b-450f-b20f-ead3b2989f97","resourceVersion":"1189","creationTimestamp":"2022-06-29T19:21:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"a116c2a1-3b73-4187-b568-91f9d2aea979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a116c2a1-3b73-4187-b568-91f9d2aea979\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f
:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f: [truncated 6191 chars]
	I0629 19:33:59.404888    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:59.404888    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.405212    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.405259    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.421532    6596 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0629 19:33:59.421590    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.421590    6596 round_trippers.go:580]     Audit-Id: 9bce5c77-9bad-4505-a460-4a1c6057766f
	I0629 19:33:59.421590    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.421693    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.421693    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.421741    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.421741    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.422066    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:59.422677    6596 pod_ready.go:92] pod "coredns-6d4b75cb6d-6vjv2" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:59.422677    6596 pod_ready.go:81] duration metric: took 96.7602ms waiting for pod "coredns-6d4b75cb6d-6vjv2" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.422677    6596 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.422677    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/etcd-multinode-20220629191914-2408
	I0629 19:33:59.422677    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.422677    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.422677    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.431562    6596 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0629 19:33:59.431562    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.431562    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.431562    6596 round_trippers.go:580]     Audit-Id: 8d326192-5555-47d7-8ca2-eaab7c6d16e7
	I0629 19:33:59.431562    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.431562    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.431562    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.431562    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.431562    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220629191914-2408","namespace":"kube-system","uid":"afa29b2e-ffc8-4567-bc07-a20bcc1715c9","resourceVersion":"1173","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.mirror":"69a054761cbc8db35015679d4e3cadaf","kubernetes.io/config.seen":"2022-06-29T19:21:09.098212600Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/ [truncated 6048 chars]
	I0629 19:33:59.433348    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:59.433411    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.433411    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.433466    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.501848    6596 round_trippers.go:574] Response Status: 200 OK in 68 milliseconds
	I0629 19:33:59.501848    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.501848    6596 round_trippers.go:580]     Audit-Id: f6222e1d-c339-4611-b75b-fed5942ae3e5
	I0629 19:33:59.501848    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.501848    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.501848    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.502038    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.502062    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.502246    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:59.502768    6596 pod_ready.go:92] pod "etcd-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:59.502768    6596 pod_ready.go:81] duration metric: took 80.091ms waiting for pod "etcd-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.502768    6596 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.502933    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220629191914-2408
	I0629 19:33:59.502933    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.502933    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.502933    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.518482    6596 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0629 19:33:59.518600    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.518600    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.518600    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.518600    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.518600    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.518600    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.518600    6596 round_trippers.go:580]     Audit-Id: d1ef75dc-25ba-459e-aa61-1f5b6a88aedf
	I0629 19:33:59.519171    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220629191914-2408","namespace":"kube-system","uid":"304971a1-1934-418a-997d-b648ac8c4540","resourceVersion":"1178","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"9c7eac304a910f4e89eb5c9093788bc9","kubernetes.io/config.mirror":"9c7eac304a910f4e89eb5c9093788bc9","kubernetes.io/config.seen":"2022-06-29T19:21:09.098334300Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","
fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{ [truncated 8515 chars]
	I0629 19:33:59.519744    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:59.519744    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.519744    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.519744    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.538368    6596 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0629 19:33:59.538461    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.538506    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.538547    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.538547    6596 round_trippers.go:580]     Audit-Id: 581540b0-5582-4d36-b7e7-69351ffe5fcd
	I0629 19:33:59.538547    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.538547    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.538547    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.538923    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:59.539249    6596 pod_ready.go:92] pod "kube-apiserver-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:59.539249    6596 pod_ready.go:81] duration metric: took 36.3534ms waiting for pod "kube-apiserver-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.539249    6596 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.539793    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220629191914-2408
	I0629 19:33:59.539793    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.539874    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.539874    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.603846    6596 round_trippers.go:574] Response Status: 200 OK in 63 milliseconds
	I0629 19:33:59.603936    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.603936    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.603936    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.603936    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.604019    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.604019    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.604019    6596 round_trippers.go:580]     Audit-Id: 8863cf8c-3aec-427c-84c4-45c95fabcb4d
	I0629 19:33:59.604313    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220629191914-2408","namespace":"kube-system","uid":"72c39e43-772d-46ed-9bea-9be30695e2cf","resourceVersion":"1208","creationTimestamp":"2022-06-29T19:21:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.mirror":"8b9be432223649b5ea346a7cb37468ab","kubernetes.io/config.seen":"2022-06-29T19:21:09.098340400Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".
":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{ [truncated 8088 chars]
	I0629 19:33:59.605086    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:33:59.605086    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.605086    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.605086    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.614639    6596 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0629 19:33:59.614639    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.614639    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.614639    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.614639    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.614639    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.614639    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.614639    6596 round_trippers.go:580]     Audit-Id: 8fcf9fb0-81f4-412f-9af4-b796f4983146
	I0629 19:33:59.614639    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:33:59.615835    6596 pod_ready.go:92] pod "kube-controller-manager-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:33:59.615835    6596 pod_ready.go:81] duration metric: took 76.5854ms waiting for pod "kube-controller-manager-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.615835    6596 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2mz9l" in "kube-system" namespace to be "Ready" ...
	I0629 19:33:59.769690    6596 request.go:533] Waited for 153.7722ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-2mz9l
	I0629 19:33:59.769955    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-2mz9l
	I0629 19:33:59.769955    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.769955    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:33:59.769955    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.780024    6596 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0629 19:33:59.780090    6596 round_trippers.go:577] Response Headers:
	I0629 19:33:59.780090    6596 round_trippers.go:580]     Audit-Id: d6e08b0e-305f-4d19-8b55-eb8c430f893b
	I0629 19:33:59.780090    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:33:59.780090    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:33:59.780090    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:33:59.780090    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:33:59.780090    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:33:59 GMT
	I0629 19:33:59.780353    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2mz9l","generateName":"kube-proxy-","namespace":"kube-system","uid":"0e6449b8-a82c-4e7f-a4a8-a595b07382f3","resourceVersion":"538","creationTimestamp":"2022-06-29T19:23:16Z","labels":{"controller-revision-hash":"5c599f896","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"72ed7b02-8d7b-4829-88da-284ac3420400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:23:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72ed7b02-8d7b-4829-88da-284ac3420400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5547 chars]
	I0629 19:33:59.975489    6596 request.go:533] Waited for 194.4153ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m02
	I0629 19:33:59.975907    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m02
	I0629 19:33:59.975961    6596 round_trippers.go:469] Request Headers:
	I0629 19:33:59.975961    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:33:59.975961    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:00.006409    6596 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0629 19:34:00.006524    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:00.006524    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:00 GMT
	I0629 19:34:00.006524    6596 round_trippers.go:580]     Audit-Id: c93e81dc-5713-470b-9c38-9e1875fb1880
	I0629 19:34:00.006524    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:00.006524    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:00.006524    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:00.006524    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:00.006977    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408-m02","uid":"aaf41655-3991-4e63-82df-36b045e3e43c","resourceVersion":"920","creationTimestamp":"2022-06-29T19:23:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:23:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:23:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 4539 chars]
	I0629 19:34:00.007556    6596 pod_ready.go:92] pod "kube-proxy-2mz9l" in "kube-system" namespace has status "Ready":"True"
	I0629 19:34:00.007556    6596 pod_ready.go:81] duration metric: took 391.7185ms waiting for pod "kube-proxy-2mz9l" in "kube-system" namespace to be "Ready" ...
	I0629 19:34:00.007556    6596 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5djlc" in "kube-system" namespace to be "Ready" ...
	I0629 19:34:00.178587    6596 request.go:533] Waited for 170.3891ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-5djlc
	I0629 19:34:00.178587    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-5djlc
	I0629 19:34:00.178587    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:00.178587    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:00.178587    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:00.193015    6596 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0629 19:34:00.193142    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:00.193235    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:00 GMT
	I0629 19:34:00.193333    6596 round_trippers.go:580]     Audit-Id: 320719bb-8c63-49a0-b064-5753700e1437
	I0629 19:34:00.193425    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:00.193425    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:00.193425    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:00.193425    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:00.196144    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5djlc","generateName":"kube-proxy-","namespace":"kube-system","uid":"734589bd-4941-4bad-bf82-8782fba95fb0","resourceVersion":"1169","creationTimestamp":"2022-06-29T19:21:20Z","labels":{"controller-revision-hash":"5c599f896","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"72ed7b02-8d7b-4829-88da-284ac3420400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72ed7b02-8d7b-4829-88da-284ac3420400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5745 chars]
	I0629 19:34:00.285752    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.239266s)
	I0629 19:34:00.285884    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:34:00.373321    6596 request.go:533] Waited for 176.7ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:34:00.373495    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:34:00.373495    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:00.373548    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:00.373745    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:00.383233    6596 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0629 19:34:00.383233    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:00.383233    6596 round_trippers.go:580]     Audit-Id: 41e560cb-4f78-4573-8b4c-c97f664b48fd
	I0629 19:34:00.383233    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:00.383233    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:00.383233    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:00.384250    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:00.384250    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:00 GMT
	I0629 19:34:00.384250    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:34:00.384250    6596 pod_ready.go:92] pod "kube-proxy-5djlc" in "kube-system" namespace has status "Ready":"True"
	I0629 19:34:00.384250    6596 pod_ready.go:81] duration metric: took 376.6918ms waiting for pod "kube-proxy-5djlc" in "kube-system" namespace to be "Ready" ...
	I0629 19:34:00.384250    6596 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bccdh" in "kube-system" namespace to be "Ready" ...
	I0629 19:34:00.409248    6596 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}: (1.1776263s)
	I0629 19:34:00.409248    6596 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 19:34:00.409248    6596 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 19:34:00.416234    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:34:00.461491    6596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 19:34:00.567132    6596 request.go:533] Waited for 182.8159ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-bccdh
	I0629 19:34:00.567389    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-proxy-bccdh
	I0629 19:34:00.567389    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:00.567389    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:00.567389    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:00.601588    6596 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0629 19:34:00.601588    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:00.601588    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:00.601588    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:00.601588    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:00.601588    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:00.601588    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:00 GMT
	I0629 19:34:00.601588    6596 round_trippers.go:580]     Audit-Id: b0542b61-3e6c-44fa-a360-8272d090f84e
	I0629 19:34:00.601588    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bccdh","generateName":"kube-proxy-","namespace":"kube-system","uid":"a949d16f-893b-4f7a-969c-45249a4800e7","resourceVersion":"1100","creationTimestamp":"2022-06-29T19:26:11Z","labels":{"controller-revision-hash":"5c599f896","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"72ed7b02-8d7b-4829-88da-284ac3420400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:26:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72ed7b02-8d7b-4829-88da-284ac3420400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5753 chars]
	I0629 19:34:00.766356    6596 request.go:533] Waited for 163.4063ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m03
	I0629 19:34:00.766434    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m03
	I0629 19:34:00.766434    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:00.766434    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:00.766552    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:00.774597    6596 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0629 19:34:00.774647    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:00.774647    6596 round_trippers.go:580]     Audit-Id: 7155faef-2425-4606-867e-e4adf1d0c736
	I0629 19:34:00.774647    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:00.774647    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:00.774647    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:00.774647    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:00.774647    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:00 GMT
	I0629 19:34:00.775205    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408-m03","uid":"a730aee4-fd4f-4ea7-9eba-d4268a85cdf0","resourceVersion":"1086","creationTimestamp":"2022-06-29T19:31:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:31:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"202
2-06-29T19:31:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f [truncated 4211 chars]
	I0629 19:34:00.775642    6596 pod_ready.go:92] pod "kube-proxy-bccdh" in "kube-system" namespace has status "Ready":"True"
	I0629 19:34:00.775716    6596 pod_ready.go:81] duration metric: took 391.4247ms waiting for pod "kube-proxy-bccdh" in "kube-system" namespace to be "Ready" ...
	I0629 19:34:00.775716    6596 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:34:00.819055    6596 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0629 19:34:00.819055    6596 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0629 19:34:00.819055    6596 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0629 19:34:00.819055    6596 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0629 19:34:00.819055    6596 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0629 19:34:00.819055    6596 command_runner.go:130] > pod/storage-provisioner configured
	I0629 19:34:00.964497    6596 request.go:533] Waited for 188.7164ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220629191914-2408
	I0629 19:34:00.964497    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220629191914-2408
	I0629 19:34:00.964497    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:00.964604    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:00.964604    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:00.974561    6596 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0629 19:34:00.974604    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:00.974604    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:00 GMT
	I0629 19:34:00.974604    6596 round_trippers.go:580]     Audit-Id: ed993975-0657-4a09-b0af-a666cc59c402
	I0629 19:34:00.974660    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:00.974660    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:00.974660    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:00.974660    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:00.974936    6596 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220629191914-2408","namespace":"kube-system","uid":"480afc74-9ecd-4957-a8c1-00d3589ebe52","resourceVersion":"1202","creationTimestamp":"2022-06-29T19:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46818e0bdbd624033ed546f4243f4257","kubernetes.io/config.mirror":"46818e0bdbd624033ed546f4243f4257","kubernetes.io/config.seen":"2022-06-29T19:20:50.548921500Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes
.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io [truncated 4972 chars]
	I0629 19:34:01.169314    6596 request.go:533] Waited for 194.025ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:34:01.169466    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408
	I0629 19:34:01.169659    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:01.169659    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:01.169659    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:01.184289    6596 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0629 19:34:01.184349    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:01.184349    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:01.184349    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:01 GMT
	I0629 19:34:01.184454    6596 round_trippers.go:580]     Audit-Id: 96a53adf-5fc7-4bd1-a935-0c67d8cda63d
	I0629 19:34:01.184454    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:01.184454    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:01.184569    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:01.185020    6596 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"mana
ger":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-2 [truncated 5324 chars]
	I0629 19:34:01.185822    6596 pod_ready.go:92] pod "kube-scheduler-multinode-20220629191914-2408" in "kube-system" namespace has status "Ready":"True"
	I0629 19:34:01.185920    6596 pod_ready.go:81] duration metric: took 410.2016ms waiting for pod "kube-scheduler-multinode-20220629191914-2408" in "kube-system" namespace to be "Ready" ...
	I0629 19:34:01.185965    6596 pod_ready.go:38] duration metric: took 1.9753124s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 19:34:01.185998    6596 api_server.go:51] waiting for apiserver process to appear ...
	I0629 19:34:01.197317    6596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:34:01.237291    6596 command_runner.go:130] > 1841
	I0629 19:34:01.237291    6596 api_server.go:71] duration metric: took 3.3864192s to wait for apiserver process to appear ...
	I0629 19:34:01.237291    6596 api_server.go:87] waiting for apiserver healthz status ...
	I0629 19:34:01.237291    6596 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54819/healthz ...
	I0629 19:34:01.261467    6596 api_server.go:266] https://127.0.0.1:54819/healthz returned 200:
	ok
	I0629 19:34:01.261467    6596 round_trippers.go:463] GET https://127.0.0.1:54819/version
	I0629 19:34:01.261467    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:01.261467    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:01.261467    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:01.266427    6596 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0629 19:34:01.266472    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:01.266472    6596 round_trippers.go:580]     Content-Length: 263
	I0629 19:34:01.266531    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:01 GMT
	I0629 19:34:01.266531    6596 round_trippers.go:580]     Audit-Id: 369fa7a3-5f35-40f5-9054-1f498aeab8cc
	I0629 19:34:01.266531    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:01.266581    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:01.266581    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:01.266581    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:01.266581    6596 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "24",
	  "gitVersion": "v1.24.2",
	  "gitCommit": "f66044f4361b9f1f96f0053dd46cb7dce5e990a8",
	  "gitTreeState": "clean",
	  "buildDate": "2022-06-15T14:15:38Z",
	  "goVersion": "go1.18.3",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0629 19:34:01.266693    6596 api_server.go:140] control plane version: v1.24.2
	I0629 19:34:01.266693    6596 api_server.go:130] duration metric: took 29.4023ms to wait for apiserver health ...
	I0629 19:34:01.266693    6596 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 19:34:01.373565    6596 request.go:533] Waited for 106.6408ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods
	I0629 19:34:01.373679    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods
	I0629 19:34:01.373679    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:01.373679    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:01.373814    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:01.387508    6596 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0629 19:34:01.387642    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:01.387642    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:01.387727    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:01.387727    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:01 GMT
	I0629 19:34:01.387765    6596 round_trippers.go:580]     Audit-Id: da606766-888a-4077-a190-5934142a9ec9
	I0629 19:34:01.387765    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:01.387796    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:01.391410    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1221"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-6vjv2","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"957527e4-431b-450f-b20f-ead3b2989f97","resourceVersion":"1189","creationTimestamp":"2022-06-29T19:21:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"a116c2a1-3b73-4187-b568-91f9d2aea979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a116c2a1-3b73-4187-b568-91f9d2aea979\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{
},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{". [truncated 84556 chars]
	I0629 19:34:01.394965    6596 system_pods.go:59] 12 kube-system pods found
	I0629 19:34:01.394965    6596 system_pods.go:61] "coredns-6d4b75cb6d-6vjv2" [957527e4-431b-450f-b20f-ead3b2989f97] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "etcd-multinode-20220629191914-2408" [afa29b2e-ffc8-4567-bc07-a20bcc1715c9] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kindnet-b7v2g" [9febc0b9-2af4-478d-acca-bb892672edc1] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kindnet-q54ld" [db15743e-e6f4-41c8-b655-898eb39adcc6] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kindnet-wbwzc" [dbc2ed3b-1dbe-446b-b485-85f5ff911200] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kube-apiserver-multinode-20220629191914-2408" [304971a1-1934-418a-997d-b648ac8c4540] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kube-controller-manager-multinode-20220629191914-2408" [72c39e43-772d-46ed-9bea-9be30695e2cf] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kube-proxy-2mz9l" [0e6449b8-a82c-4e7f-a4a8-a595b07382f3] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kube-proxy-5djlc" [734589bd-4941-4bad-bf82-8782fba95fb0] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kube-proxy-bccdh" [a949d16f-893b-4f7a-969c-45249a4800e7] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "kube-scheduler-multinode-20220629191914-2408" [480afc74-9ecd-4957-a8c1-00d3589ebe52] Running
	I0629 19:34:01.394965    6596 system_pods.go:61] "storage-provisioner" [ad5ec42d-16a3-429c-a3d7-c08eeb03dcae] Running
	I0629 19:34:01.394965    6596 system_pods.go:74] duration metric: took 128.271ms to wait for pod list to return data ...
	I0629 19:34:01.394965    6596 default_sa.go:34] waiting for default service account to be created ...
	I0629 19:34:01.543296    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1270542s)
	I0629 19:34:01.543296    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:34:01.563440    6596 request.go:533] Waited for 168.428ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/default/serviceaccounts
	I0629 19:34:01.563483    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/default/serviceaccounts
	I0629 19:34:01.563483    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:01.563570    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:01.563570    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:01.573761    6596 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0629 19:34:01.573761    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:01.573761    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:01.573761    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:01.573761    6596 round_trippers.go:580]     Content-Length: 262
	I0629 19:34:01.573761    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:01 GMT
	I0629 19:34:01.573761    6596 round_trippers.go:580]     Audit-Id: dd93e89b-6f15-4a95-b04c-ff88e064731c
	I0629 19:34:01.573761    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:01.573761    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:01.573761    6596 request.go:1073] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1221"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4ae379a5-0caa-4b0a-a2f8-eac6048156ef","resourceVersion":"318","creationTimestamp":"2022-06-29T19:21:20Z"}}]}
	I0629 19:34:01.574308    6596 default_sa.go:45] found service account: "default"
	I0629 19:34:01.574308    6596 default_sa.go:55] duration metric: took 179.3416ms for default service account to be created ...
	I0629 19:34:01.574308    6596 system_pods.go:116] waiting for k8s-apps to be running ...
	I0629 19:34:01.712705    6596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 19:34:01.776211    6596 request.go:533] Waited for 201.7146ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods
	I0629 19:34:01.776431    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/namespaces/kube-system/pods
	I0629 19:34:01.776483    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:01.776516    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:01.776516    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:01.791318    6596 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0629 19:34:01.791419    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:01.791458    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:01.791744    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:01 GMT
	I0629 19:34:01.791744    6596 round_trippers.go:580]     Audit-Id: d4a0a919-5b90-4958-a170-ceecb655f2a0
	I0629 19:34:01.791807    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:01.794312    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:01.794312    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:01.798546    6596 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1221"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-6vjv2","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"957527e4-431b-450f-b20f-ead3b2989f97","resourceVersion":"1189","creationTimestamp":"2022-06-29T19:21:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"a116c2a1-3b73-4187-b568-91f9d2aea979","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-29T19:21:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a116c2a1-3b73-4187-b568-91f9d2aea979\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{
},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{". [truncated 84556 chars]
	I0629 19:34:01.805874    6596 system_pods.go:86] 12 kube-system pods found
	I0629 19:34:01.805874    6596 system_pods.go:89] "coredns-6d4b75cb6d-6vjv2" [957527e4-431b-450f-b20f-ead3b2989f97] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "etcd-multinode-20220629191914-2408" [afa29b2e-ffc8-4567-bc07-a20bcc1715c9] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kindnet-b7v2g" [9febc0b9-2af4-478d-acca-bb892672edc1] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kindnet-q54ld" [db15743e-e6f4-41c8-b655-898eb39adcc6] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kindnet-wbwzc" [dbc2ed3b-1dbe-446b-b485-85f5ff911200] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kube-apiserver-multinode-20220629191914-2408" [304971a1-1934-418a-997d-b648ac8c4540] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kube-controller-manager-multinode-20220629191914-2408" [72c39e43-772d-46ed-9bea-9be30695e2cf] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kube-proxy-2mz9l" [0e6449b8-a82c-4e7f-a4a8-a595b07382f3] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kube-proxy-5djlc" [734589bd-4941-4bad-bf82-8782fba95fb0] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kube-proxy-bccdh" [a949d16f-893b-4f7a-969c-45249a4800e7] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "kube-scheduler-multinode-20220629191914-2408" [480afc74-9ecd-4957-a8c1-00d3589ebe52] Running
	I0629 19:34:01.805874    6596 system_pods.go:89] "storage-provisioner" [ad5ec42d-16a3-429c-a3d7-c08eeb03dcae] Running
	I0629 19:34:01.805874    6596 system_pods.go:126] duration metric: took 231.564ms to wait for k8s-apps to be running ...
	I0629 19:34:01.805874    6596 system_svc.go:44] waiting for kubelet service to be running ....
	I0629 19:34:01.815583    6596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 19:34:02.046789    6596 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0629 19:34:02.046789    6596 system_svc.go:56] duration metric: took 240.9141ms WaitForService to wait for kubelet.
	I0629 19:34:02.046789    6596 kubeadm.go:572] duration metric: took 4.1959122s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0629 19:34:02.046789    6596 node_conditions.go:102] verifying NodePressure condition ...
	I0629 19:34:02.050352    6596 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0629 19:34:02.046789    6596 round_trippers.go:463] GET https://127.0.0.1:54819/api/v1/nodes
	I0629 19:34:02.053084    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:02.053084    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:02.053084    6596 addons.go:414] enableAddons completed in 4.2022071s
	I0629 19:34:02.053084    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:02.059492    6596 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0629 19:34:02.060183    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:02.060183    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:02.060228    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:02.060228    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:02 GMT
	I0629 19:34:02.060228    6596 round_trippers.go:580]     Audit-Id: d38df85f-1527-4699-8ae2-addff4e986be
	I0629 19:34:02.060266    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:02.060266    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:02.060409    6596 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1221"},"items":[{"metadata":{"name":"multinode-20220629191914-2408","uid":"13c83dd6-8a62-41fb-8677-ccb22213d5ec","resourceVersion":"1127","creationTimestamp":"2022-06-29T19:21:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220629191914-2408","kubernetes.io/os":"linux","minikube.k8s.io/commit":"80ef72c6e06144133907f90b1b2924df52b551ed","minikube.k8s.io/name":"multinode-20220629191914-2408","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_29T19_21_11_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-ma
naged-attach-detach":"true"},"managedFields":[{"manager":"kubelet","ope [truncated 16112 chars]
	I0629 19:34:02.061556    6596 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0629 19:34:02.061599    6596 node_conditions.go:123] node cpu capacity is 16
	I0629 19:34:02.061644    6596 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0629 19:34:02.061644    6596 node_conditions.go:123] node cpu capacity is 16
	I0629 19:34:02.061644    6596 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0629 19:34:02.061644    6596 node_conditions.go:123] node cpu capacity is 16
	I0629 19:34:02.061644    6596 node_conditions.go:105] duration metric: took 14.8547ms to run NodePressure ...
	I0629 19:34:02.061697    6596 start.go:213] waiting for startup goroutines ...
	I0629 19:34:02.072702    6596 config.go:178] Loaded profile config "multinode-20220629191914-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 19:34:02.072702    6596 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\config.json ...
	I0629 19:34:02.086372    6596 out.go:177] * Starting worker node multinode-20220629191914-2408-m02 in cluster multinode-20220629191914-2408
	I0629 19:34:02.088608    6596 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 19:34:02.091071    6596 out.go:177] * Pulling base image ...
	I0629 19:34:02.094150    6596 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 19:34:02.094228    6596 cache.go:57] Caching tarball of preloaded images
	I0629 19:34:02.094298    6596 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 19:34:02.094482    6596 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 19:34:02.094899    6596 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 19:34:02.095177    6596 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\config.json ...
	I0629 19:34:03.181150    6596 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 19:34:03.181182    6596 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 19:34:03.181226    6596 cache.go:208] Successfully downloaded all kic artifacts
	I0629 19:34:03.181328    6596 start.go:352] acquiring machines lock for multinode-20220629191914-2408-m02: {Name:mka48302875babb74b783eb09491576883a88fd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 19:34:03.181546    6596 start.go:356] acquired machines lock for "multinode-20220629191914-2408-m02" in 181.5µs
	I0629 19:34:03.181679    6596 start.go:94] Skipping create...Using existing machine configuration
	I0629 19:34:03.181679    6596 fix.go:55] fixHost starting: m02
	I0629 19:34:03.196535    6596 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408-m02 --format={{.State.Status}}
	I0629 19:34:04.314089    6596 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408-m02 --format={{.State.Status}}: (1.1175458s)
	I0629 19:34:04.314089    6596 fix.go:103] recreateIfNeeded on multinode-20220629191914-2408-m02: state=Stopped err=<nil>
	W0629 19:34:04.314089    6596 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 19:34:04.317010    6596 out.go:177] * Restarting existing docker container for "multinode-20220629191914-2408-m02" ...
	I0629 19:34:04.327011    6596 cli_runner.go:164] Run: docker start multinode-20220629191914-2408-m02
	I0629 19:34:06.341392    6596 cli_runner.go:217] Completed: docker start multinode-20220629191914-2408-m02: (2.0143677s)
	I0629 19:34:06.354177    6596 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408-m02 --format={{.State.Status}}
	I0629 19:34:07.497047    6596 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408-m02 --format={{.State.Status}}: (1.1428624s)
	I0629 19:34:07.497047    6596 kic.go:416] container "multinode-20220629191914-2408-m02" state is running.
	I0629 19:34:07.506049    6596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408-m02
	I0629 19:34:08.651700    6596 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408-m02: (1.1455742s)
	I0629 19:34:08.651759    6596 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408\config.json ...
	I0629 19:34:08.654075    6596 machine.go:88] provisioning docker machine ...
	I0629 19:34:08.654146    6596 ubuntu.go:169] provisioning hostname "multinode-20220629191914-2408-m02"
	I0629 19:34:08.663282    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:09.799712    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1362719s)
	I0629 19:34:09.803317    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:34:09.804289    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54862 <nil> <nil>}
	I0629 19:34:09.804359    6596 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220629191914-2408-m02 && echo "multinode-20220629191914-2408-m02" | sudo tee /etc/hostname
	I0629 19:34:10.039387    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220629191914-2408-m02
	
	I0629 19:34:10.048577    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:11.212748    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1641625s)
	I0629 19:34:11.215772    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:34:11.216821    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54862 <nil> <nil>}
	I0629 19:34:11.216821    6596 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220629191914-2408-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220629191914-2408-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220629191914-2408-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 19:34:11.421075    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: 
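The SSH command above is minikube's standard idiom for mapping the machine name to 127.0.1.1: rewrite an existing 127.0.1.1 entry if one is present, otherwise append one. A minimal sketch of the same logic run against a scratch copy of /etc/hosts (so no sudo is needed; the temp file and seed contents are placeholders):

```shell
#!/bin/sh
# Demonstrate the 127.0.1.1 hostname-mapping logic against a scratch
# file instead of the real /etc/hosts.
HOSTS=$(mktemp)
NAME="multinode-20220629191914-2408-m02"
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Only touch the file if the hostname is not already mapped.
if ! grep -q "[[:space:]]$NAME" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # An entry exists: rewrite it in place (GNU sed -i).
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # No 127.0.1.1 entry yet: append one.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
rm -f "$HOSTS"
```

In the real run the `sudo tee -a` / `sudo sed -i` forms are used because /etc/hosts is root-owned inside the container.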
	I0629 19:34:11.423593    6596 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I0629 19:34:11.423664    6596 ubuntu.go:177] setting up certificates
	I0629 19:34:11.423664    6596 provision.go:83] configureAuth start
	I0629 19:34:11.433579    6596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408-m02
	I0629 19:34:12.550470    6596 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408-m02: (1.1163584s)
	I0629 19:34:12.550869    6596 provision.go:138] copyHostCerts
	I0629 19:34:12.551049    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem
	I0629 19:34:12.551343    6596 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I0629 19:34:12.551343    6596 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I0629 19:34:12.551793    6596 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0629 19:34:12.553033    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem
	I0629 19:34:12.553033    6596 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I0629 19:34:12.553033    6596 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I0629 19:34:12.553801    6596 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0629 19:34:12.554859    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem
	I0629 19:34:12.555137    6596 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I0629 19:34:12.555241    6596 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I0629 19:34:12.555755    6596 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1679 bytes)
	I0629 19:34:12.556585    6596 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-20220629191914-2408-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220629191914-2408-m02]
	I0629 19:34:13.185644    6596 provision.go:172] copyRemoteCerts
	I0629 19:34:13.194483    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 19:34:13.201293    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:14.308967    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.107667s)
	I0629 19:34:14.309454    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54862 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408-m02\id_rsa Username:docker}
	I0629 19:34:14.458671    6596 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2641794s)
	I0629 19:34:14.458671    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0629 19:34:14.458671    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0629 19:34:14.519760    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0629 19:34:14.520476    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1269 bytes)
	I0629 19:34:14.574226    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0629 19:34:14.574771    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0629 19:34:14.630329    6596 provision.go:86] duration metric: configureAuth took 3.2066444s
	I0629 19:34:14.630403    6596 ubuntu.go:193] setting minikube options for container-runtime
	I0629 19:34:14.630934    6596 config.go:178] Loaded profile config "multinode-20220629191914-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 19:34:14.639397    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:15.756928    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1174239s)
	I0629 19:34:15.761337    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:34:15.761794    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54862 <nil> <nil>}
	I0629 19:34:15.761865    6596 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 19:34:15.916757    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 19:34:15.916809    6596 ubuntu.go:71] root file system type: overlay
	I0629 19:34:15.917203    6596 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 19:34:15.924774    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:17.043168    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1183863s)
	I0629 19:34:17.049229    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:34:17.050008    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54862 <nil> <nil>}
	I0629 19:34:17.050008    6596 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 19:34:17.270796    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 19:34:17.270796    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:18.400082    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1289572s)
	I0629 19:34:18.405095    6596 main.go:134] libmachine: Using SSH client type: native
	I0629 19:34:18.405095    6596 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 54862 <nil> <nil>}
	I0629 19:34:18.405095    6596 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 19:34:18.624674    6596 main.go:134] libmachine: SSH cmd err, output: <nil>: 
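The `sudo diff -u ... || { sudo mv ...; systemctl restart docker; }` command above is a replace-only-if-changed idiom: `diff` exits 0 when the generated unit matches the installed one, so the `||` branch (swap the file in and restart the daemon) runs only when the contents actually differ, avoiding a needless docker restart. A minimal sketch with plain temp files standing in for the unit files (no sudo or systemctl):

```shell
#!/bin/sh
# Replace-if-changed: swap in the new file (and, in minikube's case,
# daemon-reload + restart docker) only when its contents differ.
cur=$(mktemp); new=$(mktemp)
echo "ExecStart=/usr/bin/dockerd" > "$cur"
echo "ExecStart=/usr/bin/dockerd --tlsverify" > "$new"

# diff exits 0 when identical, non-zero when different.
if ! diff -u "$cur" "$new" >/dev/null; then
  mv "$new" "$cur"        # stands in for: sudo mv docker.service.new docker.service
  echo "would restart"    # stands in for: systemctl daemon-reload && restart docker
fi
cat "$cur"
rm -f "$cur" "$new"
```

Here the files differ, so the move runs and `cat` shows the `--tlsverify` line; on a second invocation with identical files the restart branch would be skipped entirely.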
	I0629 19:34:18.624674    6596 machine.go:91] provisioned docker machine in 9.9705325s
	I0629 19:34:18.624674    6596 start.go:306] post-start starting for "multinode-20220629191914-2408-m02" (driver="docker")
	I0629 19:34:18.624764    6596 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 19:34:18.638361    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 19:34:18.647019    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:19.777413    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1303134s)
	I0629 19:34:19.777413    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54862 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408-m02\id_rsa Username:docker}
	I0629 19:34:19.923512    6596 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2851433s)
	I0629 19:34:19.938449    6596 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 19:34:19.954543    6596 command_runner.go:130] > NAME="Ubuntu"
	I0629 19:34:19.954543    6596 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0629 19:34:19.954543    6596 command_runner.go:130] > ID=ubuntu
	I0629 19:34:19.954543    6596 command_runner.go:130] > ID_LIKE=debian
	I0629 19:34:19.954543    6596 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0629 19:34:19.954543    6596 command_runner.go:130] > VERSION_ID="20.04"
	I0629 19:34:19.954543    6596 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0629 19:34:19.954543    6596 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0629 19:34:19.954543    6596 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0629 19:34:19.954543    6596 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0629 19:34:19.954543    6596 command_runner.go:130] > VERSION_CODENAME=focal
	I0629 19:34:19.954543    6596 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0629 19:34:19.954543    6596 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 19:34:19.954543    6596 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 19:34:19.954543    6596 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 19:34:19.954543    6596 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 19:34:19.954543    6596 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I0629 19:34:19.955506    6596 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I0629 19:34:19.955506    6596 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem -> 24082.pem in /etc/ssl/certs
	I0629 19:34:19.955506    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem -> /etc/ssl/certs/24082.pem
	I0629 19:34:19.965492    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 19:34:19.984498    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem --> /etc/ssl/certs/24082.pem (1708 bytes)
	I0629 19:34:20.040702    6596 start.go:309] post-start completed in 1.4153872s
	I0629 19:34:20.052103    6596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 19:34:20.059098    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:21.197602    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1384961s)
	I0629 19:34:21.197602    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54862 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408-m02\id_rsa Username:docker}
	I0629 19:34:21.291720    6596 command_runner.go:130] > 5%!
	(MISSING)I0629 19:34:21.291720    6596 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2396083s)
	I0629 19:34:21.303684    6596 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 19:34:21.322497    6596 command_runner.go:130] > 227G
	I0629 19:34:21.322497    6596 fix.go:57] fixHost completed within 18.140696s
	I0629 19:34:21.322497    6596 start.go:81] releasing machines lock for "multinode-20220629191914-2408-m02", held for 18.1408018s
	I0629 19:34:21.330510    6596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408-m02
	I0629 19:34:22.470442    6596 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408-m02: (1.1399236s)
	I0629 19:34:22.474884    6596 out.go:177] * Found network options:
	I0629 19:34:22.479633    6596 out.go:177]   - NO_PROXY=192.168.58.2
	W0629 19:34:22.481225    6596 proxy.go:118] fail to check proxy env: Error ip not in block
	I0629 19:34:22.483302    6596 out.go:177]   - no_proxy=192.168.58.2
	W0629 19:34:22.483302    6596 proxy.go:118] fail to check proxy env: Error ip not in block
	W0629 19:34:22.483302    6596 proxy.go:118] fail to check proxy env: Error ip not in block
	I0629 19:34:22.488563    6596 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 19:34:22.496108    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:22.497098    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0629 19:34:22.504101    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:34:23.649791    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1536758s)
	I0629 19:34:23.649791    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54862 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408-m02\id_rsa Username:docker}
	I0629 19:34:23.665556    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1614113s)
	I0629 19:34:23.665968    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54862 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408-m02\id_rsa Username:docker}
	I0629 19:34:23.865694    6596 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0629 19:34:23.865694    6596 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0629 19:34:23.865694    6596 command_runner.go:130] > <H1>302 Moved</H1>
	I0629 19:34:23.865694    6596 command_runner.go:130] > The document has moved
	I0629 19:34:23.865694    6596 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0629 19:34:23.865694    6596 command_runner.go:130] > </BODY></HTML>
	I0629 19:34:23.865694    6596 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.3770489s)
	I0629 19:34:23.865694    6596 ssh_runner.go:235] Completed: sudo mkdir -p /etc/systemd/system/cri-docker.service.d: (1.3685869s)
	I0629 19:34:23.865694    6596 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0629 19:34:23.929543    6596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 19:34:24.133218    6596 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0629 19:34:24.354320    6596 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 19:34:24.411616    6596 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0629 19:34:24.411616    6596 command_runner.go:130] > [Unit]
	I0629 19:34:24.411616    6596 command_runner.go:130] > Description=Docker Application Container Engine
	I0629 19:34:24.411709    6596 command_runner.go:130] > Documentation=https://docs.docker.com
	I0629 19:34:24.411709    6596 command_runner.go:130] > BindsTo=containerd.service
	I0629 19:34:24.411709    6596 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0629 19:34:24.411709    6596 command_runner.go:130] > Wants=network-online.target
	I0629 19:34:24.411795    6596 command_runner.go:130] > Requires=docker.socket
	I0629 19:34:24.411795    6596 command_runner.go:130] > StartLimitBurst=3
	I0629 19:34:24.411795    6596 command_runner.go:130] > StartLimitIntervalSec=60
	I0629 19:34:24.411795    6596 command_runner.go:130] > [Service]
	I0629 19:34:24.411795    6596 command_runner.go:130] > Type=notify
	I0629 19:34:24.411795    6596 command_runner.go:130] > Restart=on-failure
	I0629 19:34:24.411795    6596 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0629 19:34:24.411795    6596 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0629 19:34:24.411795    6596 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0629 19:34:24.411795    6596 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0629 19:34:24.411926    6596 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0629 19:34:24.411926    6596 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0629 19:34:24.411968    6596 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0629 19:34:24.411968    6596 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0629 19:34:24.412028    6596 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0629 19:34:24.412028    6596 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0629 19:34:24.412028    6596 command_runner.go:130] > ExecStart=
	I0629 19:34:24.412028    6596 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0629 19:34:24.412089    6596 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0629 19:34:24.412089    6596 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0629 19:34:24.412089    6596 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0629 19:34:24.412089    6596 command_runner.go:130] > LimitNOFILE=infinity
	I0629 19:34:24.412089    6596 command_runner.go:130] > LimitNPROC=infinity
	I0629 19:34:24.412089    6596 command_runner.go:130] > LimitCORE=infinity
	I0629 19:34:24.412162    6596 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0629 19:34:24.412162    6596 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0629 19:34:24.412162    6596 command_runner.go:130] > TasksMax=infinity
	I0629 19:34:24.412162    6596 command_runner.go:130] > TimeoutStartSec=0
	I0629 19:34:24.412215    6596 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0629 19:34:24.412215    6596 command_runner.go:130] > Delegate=yes
	I0629 19:34:24.412215    6596 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0629 19:34:24.412215    6596 command_runner.go:130] > KillMode=process
	I0629 19:34:24.412215    6596 command_runner.go:130] > [Install]
	I0629 19:34:24.412284    6596 command_runner.go:130] > WantedBy=multi-user.target
	I0629 19:34:24.412284    6596 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 19:34:24.423006    6596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 19:34:24.457010    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 19:34:24.507102    6596 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0629 19:34:24.507882    6596 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
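The two lines tee'd into /etc/crictl.yaml above are the entire crictl configuration needed to point it at the cri-dockerd shim. A minimal sketch of the same write, using a scratch file instead of /etc/crictl.yaml so no root or tee-over-sudo is needed (the temp path is illustrative):

```shell
# Write a crictl config pointing both the runtime and image endpoints
# at the cri-dockerd socket, exactly as the log's printf | sudo tee does.
cfg="$(mktemp)"
printf '%s\n' \
  'runtime-endpoint: unix:///var/run/cri-dockerd.sock' \
  'image-endpoint: unix:///var/run/cri-dockerd.sock' > "$cfg"
cat "$cfg"
```

With this file in place as /etc/crictl.yaml, the later `sudo crictl version` call no longer needs `--runtime-endpoint` flags.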
	I0629 19:34:24.521404    6596 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 19:34:24.714553    6596 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 19:34:24.897635    6596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 19:34:25.099633    6596 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 19:34:25.851278    6596 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 19:34:26.025863    6596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 19:34:26.227617    6596 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 19:34:26.259466    6596 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 19:34:26.269154    6596 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 19:34:26.289255    6596 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0629 19:34:26.289326    6596 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0629 19:34:26.289326    6596 command_runner.go:130] > Device: 100083h/1048707d	Inode: 111         Links: 1
	I0629 19:34:26.289397    6596 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0629 19:34:26.289397    6596 command_runner.go:130] > Access: 2022-06-29 19:34:25.162301000 +0000
	I0629 19:34:26.289397    6596 command_runner.go:130] > Modify: 2022-06-29 19:34:24.162301000 +0000
	I0629 19:34:26.289397    6596 command_runner.go:130] > Change: 2022-06-29 19:34:24.162301000 +0000
	I0629 19:34:26.289467    6596 command_runner.go:130] >  Birth: -
	I0629 19:34:26.289467    6596 start.go:468] Will wait 60s for crictl version
	I0629 19:34:26.299211    6596 ssh_runner.go:195] Run: sudo crictl version
	I0629 19:34:26.382283    6596 command_runner.go:130] > Version:  0.1.0
	I0629 19:34:26.382283    6596 command_runner.go:130] > RuntimeName:  docker
	I0629 19:34:26.382283    6596 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0629 19:34:26.383342    6596 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0629 19:34:26.383342    6596 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 19:34:26.394112    6596 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 19:34:26.476468    6596 command_runner.go:130] > 20.10.17
	I0629 19:34:26.485982    6596 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 19:34:26.568820    6596 command_runner.go:130] > 20.10.17
	I0629 19:34:26.574780    6596 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 19:34:26.576619    6596 out.go:177]   - env NO_PROXY=192.168.58.2
	I0629 19:34:26.585436    6596 cli_runner.go:164] Run: docker exec -t multinode-20220629191914-2408-m02 dig +short host.docker.internal
	I0629 19:34:27.913847    6596 cli_runner.go:217] Completed: docker exec -t multinode-20220629191914-2408-m02 dig +short host.docker.internal: (1.3282902s)
	I0629 19:34:27.913927    6596 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 19:34:27.924106    6596 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 19:34:27.939231    6596 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
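The /etc/hosts command above uses a common idempotent-update pattern: filter out any existing line for the name, append the fresh mapping, then copy the result back. A self-contained sketch of the same pattern against a throwaway hosts file (paths and IPs are illustrative, and the final sudo cp is replaced by a plain mv):

```shell
# Idempotent host-entry update: drop any stale mapping for the name,
# then append the current one, so repeated runs leave exactly one line.
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.168.65.9\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.65.2\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

The grep -v/append pair is why the step is safe to re-run on every start: a stale 192.168.65.9 entry is replaced rather than duplicated.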
	I0629 19:34:27.968203    6596 certs.go:54] Setting up C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\multinode-20220629191914-2408 for IP: 192.168.58.3
	I0629 19:34:27.970738    6596 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key
	I0629 19:34:27.972888    6596 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key
	I0629 19:34:27.972888    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0629 19:34:27.972888    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0629 19:34:27.972888    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0629 19:34:27.973505    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0629 19:34:27.974027    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem (1338 bytes)
	W0629 19:34:27.974782    6596 certs.go:384] ignoring C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408_empty.pem, impossibly tiny 0 bytes
	I0629 19:34:27.974782    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0629 19:34:27.974782    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0629 19:34:27.975488    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0629 19:34:27.975488    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0629 19:34:27.976200    6596 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem (1708 bytes)
	I0629 19:34:27.976200    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem -> /usr/share/ca-certificates/24082.pem
	I0629 19:34:27.976838    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:34:27.976838    6596 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem -> /usr/share/ca-certificates/2408.pem
	I0629 19:34:27.977484    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 19:34:28.036582    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 19:34:28.091446    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 19:34:28.154840    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 19:34:28.220972    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem --> /usr/share/ca-certificates/24082.pem (1708 bytes)
	I0629 19:34:28.275588    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 19:34:28.327354    6596 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem --> /usr/share/ca-certificates/2408.pem (1338 bytes)
	I0629 19:34:28.393235    6596 ssh_runner.go:195] Run: openssl version
	I0629 19:34:28.411390    6596 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0629 19:34:28.420381    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 19:34:28.458908    6596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:34:28.470916    6596 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 29 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:34:28.470916    6596 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:34:28.478902    6596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 19:34:28.498254    6596 command_runner.go:130] > b5213941
	I0629 19:34:28.507188    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 19:34:28.542725    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2408.pem && ln -fs /usr/share/ca-certificates/2408.pem /etc/ssl/certs/2408.pem"
	I0629 19:34:28.580292    6596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2408.pem
	I0629 19:34:28.601488    6596 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 29 18:12 /usr/share/ca-certificates/2408.pem
	I0629 19:34:28.601488    6596 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 18:12 /usr/share/ca-certificates/2408.pem
	I0629 19:34:28.612274    6596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2408.pem
	I0629 19:34:28.635477    6596 command_runner.go:130] > 51391683
	I0629 19:34:28.645177    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2408.pem /etc/ssl/certs/51391683.0"
	I0629 19:34:28.682332    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24082.pem && ln -fs /usr/share/ca-certificates/24082.pem /etc/ssl/certs/24082.pem"
	I0629 19:34:28.719445    6596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24082.pem
	I0629 19:34:28.736430    6596 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 29 18:12 /usr/share/ca-certificates/24082.pem
	I0629 19:34:28.736430    6596 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 18:12 /usr/share/ca-certificates/24082.pem
	I0629 19:34:28.750590    6596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24082.pem
	I0629 19:34:28.766593    6596 command_runner.go:130] > 3ec20f2e
	I0629 19:34:28.775594    6596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24082.pem /etc/ssl/certs/3ec20f2e.0"
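The hash/symlink dance above (openssl x509 -hash, then ln -fs to `<hash>.0`) is how OpenSSL's trust directory works: certificates in /etc/ssl/certs are looked up by subject-name hash, so each PEM needs a hash-named symlink. A runnable sketch with a throwaway self-signed cert in a temp directory (all paths and the CN are illustrative):

```shell
# Generate a disposable CA cert, compute its OpenSSL subject hash,
# and create the <hash>.0 symlink the trust store expects.
certdir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' \
  -keyout "$certdir/demo.key" -out "$certdir/demo.pem" -days 1 2>/dev/null
hash="$(openssl x509 -hash -noout -in "$certdir/demo.pem")"
ln -fs "$certdir/demo.pem" "$certdir/$hash.0"
ls -l "$certdir/$hash.0"
```

The hash is 8 hex characters (b5213941, 51391683, and 3ec20f2e in the log); the trailing `.0` disambiguates colliding hashes.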
	I0629 19:34:28.806764    6596 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 19:34:28.975948    6596 command_runner.go:130] > cgroupfs
	I0629 19:34:28.976148    6596 cni.go:95] Creating CNI manager for ""
	I0629 19:34:28.976148    6596 cni.go:156] 3 nodes found, recommending kindnet
	I0629 19:34:28.976148    6596 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 19:34:28.976148    6596 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220629191914-2408 NodeName:multinode-20220629191914-2408-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:
/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 19:34:28.976148    6596 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220629191914-2408-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 19:34:28.976148    6596 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220629191914-2408-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:multinode-20220629191914-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 19:34:28.986833    6596 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 19:34:29.011169    6596 command_runner.go:130] > kubeadm
	I0629 19:34:29.011169    6596 command_runner.go:130] > kubectl
	I0629 19:34:29.011169    6596 command_runner.go:130] > kubelet
	I0629 19:34:29.013777    6596 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 19:34:29.025907    6596 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0629 19:34:29.050533    6596 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (495 bytes)
	I0629 19:34:29.092747    6596 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 19:34:29.146994    6596 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0629 19:34:29.160609    6596 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 19:34:29.190999    6596 host.go:66] Checking if "multinode-20220629191914-2408" exists ...
	I0629 19:34:29.191968    6596 config.go:178] Loaded profile config "multinode-20220629191914-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 19:34:29.192015    6596 start.go:282] JoinCluster: &{Name:multinode-20220629191914-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:multinode-20220629191914-2408 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:
false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 19:34:29.192099    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0629 19:34:29.199772    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:34:30.323895    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1241161s)
	I0629 19:34:30.323895    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:34:30.609186    6596 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f 
	I0629 19:34:30.609186    6596 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm token create --print-join-command --ttl=0": (1.4170769s)
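The `--discovery-token-ca-cert-hash sha256:…` value printed by `kubeadm token create --print-join-command` above is not secret material: it is the SHA-256 of the cluster CA's DER-encoded public key, and kubeadm's documented recipe recomputes it from any copy of ca.crt. A sketch against a throwaway cert standing in for /var/lib/minikube/certs/ca.crt:

```shell
# Recompute a kubeadm discovery hash: SHA-256 of the CA's DER public key.
dir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=minikubeCA' \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null
cahash="sha256:$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | sed 's/^.* //')"
echo "$cahash"
```

A joining node compares this hash against the CA it receives during discovery, which is how the join below can trust control-plane.minikube.internal with only a token and a hash.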
	I0629 19:34:30.609186    6596 start.go:295] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:34:30.609186    6596 host.go:66] Checking if "multinode-20220629191914-2408" exists ...
	I0629 19:34:30.621393    6596 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl drain multinode-20220629191914-2408-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0629 19:34:30.627382    6596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:34:31.800341    6596 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1728357s)
	I0629 19:34:31.800710    6596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54820 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:34:31.953383    6596 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0629 19:34:32.200788    6596 command_runner.go:130] ! WARNING: ignoring DaemonSet-managed Pods: kube-system/kindnet-q54ld, kube-system/kube-proxy-2mz9l
	I0629 19:34:35.238436    6596 command_runner.go:130] > node/multinode-20220629191914-2408-m02 cordoned
	I0629 19:34:35.238436    6596 command_runner.go:130] > pod "busybox-d46db594c-rbqbj" has DeletionTimestamp older than 1 seconds, skipping
	I0629 19:34:35.238436    6596 command_runner.go:130] > node/multinode-20220629191914-2408-m02 drained
	I0629 19:34:35.238436    6596 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl drain multinode-20220629191914-2408-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (4.6170122s)
	I0629 19:34:35.238436    6596 node.go:109] successfully drained node "m02"
	I0629 19:34:35.239533    6596 loader.go:372] Config loaded from file:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 19:34:35.240166    6596 kapi.go:59] client config for multinode-20220629191914-2408: &rest.Config{Host:"https://127.0.0.1:54819", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\profiles\\multinode-20220629191914-2408\\client.key", CAFile:"C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\ca.crt", CertData:[]u
int8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2300480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0629 19:34:35.240919    6596 request.go:1073] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0629 19:34:35.240968    6596 round_trippers.go:463] DELETE https://127.0.0.1:54819/api/v1/nodes/multinode-20220629191914-2408-m02
	I0629 19:34:35.240968    6596 round_trippers.go:469] Request Headers:
	I0629 19:34:35.240968    6596 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0629 19:34:35.240968    6596 round_trippers.go:473]     Accept: application/json, */*
	I0629 19:34:35.240968    6596 round_trippers.go:473]     Content-Type: application/json
	I0629 19:34:35.252620    6596 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0629 19:34:35.252620    6596 round_trippers.go:577] Response Headers:
	I0629 19:34:35.252620    6596 round_trippers.go:580]     Audit-Id: 28c5810d-2bf9-42ed-9e57-cafbcadc30f0
	I0629 19:34:35.252620    6596 round_trippers.go:580]     Cache-Control: no-cache, private
	I0629 19:34:35.252620    6596 round_trippers.go:580]     Content-Type: application/json
	I0629 19:34:35.252620    6596 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6902b7b8-6bda-44b5-8d27-d9700b2d5253
	I0629 19:34:35.252620    6596 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e710d496-6a54-4d89-bf17-22cf550b4837
	I0629 19:34:35.252620    6596 round_trippers.go:580]     Content-Length: 184
	I0629 19:34:35.252620    6596 round_trippers.go:580]     Date: Wed, 29 Jun 2022 19:34:35 GMT
	I0629 19:34:35.253256    6596 request.go:1073] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-20220629191914-2408-m02","kind":"nodes","uid":"aaf41655-3991-4e63-82df-36b045e3e43c"}}
	I0629 19:34:35.253461    6596 node.go:125] successfully deleted node "m02"
	I0629 19:34:35.253496    6596 start.go:299] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:34:35.253589    6596 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:34:35.253735    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02"
	I0629 19:34:35.358034    6596 command_runner.go:130] ! W0629 19:34:35.347249    1345 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:34:35.358097    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0629 19:34:35.412194    6596 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0629 19:34:35.642510    6596 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0629 19:34:35.642510    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0629 19:34:36.039874    6596 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0629 19:34:36.040030    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:36.051131    6596 command_runner.go:130] > [preflight] Running pre-flight checks
	I0629 19:34:36.051131    6596 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0629 19:34:36.051131    6596 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0629 19:34:36.051234    6596 start.go:305] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:34:35.347249    1345 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:36.051234    6596 start.go:308] resetting worker node "m02" before attempting to rejoin cluster...
	I0629 19:34:36.051324    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force"
	I0629 19:34:36.142668    6596 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0629 19:34:36.142668    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:36.142668    6596 start.go:310] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:36.142668    6596 retry.go:31] will retry after 9.377141872s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:34:35.347249    1345 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:45.534152    6596 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:34:45.534234    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02"
	I0629 19:34:45.622611    6596 command_runner.go:130] ! W0629 19:34:45.619485    1457 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:34:45.622611    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0629 19:34:45.671472    6596 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0629 19:34:45.848185    6596 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0629 19:34:45.848185    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0629 19:34:45.909019    6596 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0629 19:34:45.909113    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:45.919286    6596 command_runner.go:130] > [preflight] Running pre-flight checks
	I0629 19:34:45.919354    6596 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0629 19:34:45.919354    6596 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0629 19:34:45.919420    6596 start.go:305] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:34:45.619485    1457 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:45.919495    6596 start.go:308] resetting worker node "m02" before attempting to rejoin cluster...
	I0629 19:34:45.919495    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force"
	I0629 19:34:46.006561    6596 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0629 19:34:46.006599    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:46.012340    6596 start.go:310] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:46.013589    6596 retry.go:31] will retry after 13.869562456s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:34:45.619485    1457 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:34:59.893939    6596 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:34:59.893939    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02"
	I0629 19:35:00.013491    6596 command_runner.go:130] > [preflight] Running pre-flight checks
	I0629 19:35:00.284553    6596 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0629 19:35:00.284553    6596 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0629 19:35:00.329128    6596 command_runner.go:130] ! W0629 19:35:00.010255    1984 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:35:00.329128    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0629 19:35:00.329128    6596 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0629 19:35:00.329128    6596 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0629 19:35:00.329128    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0629 19:35:00.329128    6596 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0629 19:35:00.329128    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0629 19:35:00.329128    6596 start.go:305] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:35:00.010255    1984 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:00.329128    6596 start.go:308] resetting worker node "m02" before attempting to rejoin cluster...
	I0629 19:35:00.330946    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force"
	I0629 19:35:00.410019    6596 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0629 19:35:00.410124    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:00.419303    6596 start.go:310] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:00.419338    6596 retry.go:31] will retry after 26.70351914s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:35:00.010255    1984 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:27.130241    6596 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:35:27.130481    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02"
	I0629 19:35:27.226228    6596 command_runner.go:130] ! W0629 19:35:27.222769    2245 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:35:27.226228    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0629 19:35:27.276734    6596 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0629 19:35:27.446017    6596 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0629 19:35:27.446017    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0629 19:35:27.551760    6596 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0629 19:35:27.552352    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:27.559297    6596 command_runner.go:130] > [preflight] Running pre-flight checks
	I0629 19:35:27.559297    6596 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0629 19:35:27.559297    6596 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0629 19:35:27.559892    6596 start.go:305] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:35:27.222769    2245 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:27.559892    6596 start.go:308] resetting worker node "m02" before attempting to rejoin cluster...
	I0629 19:35:27.560007    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force"
	I0629 19:35:27.638888    6596 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0629 19:35:27.638966    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:27.647460    6596 start.go:310] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:27.647460    6596 retry.go:31] will retry after 19.090249398s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:35:27.222769    2245 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:46.739301    6596 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:35:46.739614    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02"
	I0629 19:35:46.834894    6596 command_runner.go:130] ! W0629 19:35:46.831145    2423 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:35:46.834894    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0629 19:35:46.884700    6596 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0629 19:35:47.044196    6596 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0629 19:35:47.044196    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0629 19:35:47.111129    6596 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0629 19:35:47.111129    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:47.119061    6596 command_runner.go:130] > [preflight] Running pre-flight checks
	I0629 19:35:47.119061    6596 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0629 19:35:47.119158    6596 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0629 19:35:47.119236    6596 start.go:305] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:35:46.831145    2423 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:47.119274    6596 start.go:308] resetting worker node "m02" before attempting to rejoin cluster...
	I0629 19:35:47.119274    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force"
	I0629 19:35:47.196599    6596 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0629 19:35:47.197137    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:47.203083    6596 start.go:310] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:35:47.203083    6596 retry.go:31] will retry after 33.236287271s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:35:46.831145    2423 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:20.442888    6596 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:36:20.443182    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02"
	I0629 19:36:20.530213    6596 command_runner.go:130] ! W0629 19:36:20.526892    2758 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:36:20.530325    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0629 19:36:20.584431    6596 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0629 19:36:20.753890    6596 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0629 19:36:20.754069    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0629 19:36:20.828783    6596 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0629 19:36:20.828783    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:20.840911    6596 command_runner.go:130] > [preflight] Running pre-flight checks
	I0629 19:36:20.841017    6596 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0629 19:36:20.841045    6596 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0629 19:36:20.841151    6596 start.go:305] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:36:20.526892    2758 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:20.841151    6596 start.go:308] resetting worker node "m02" before attempting to rejoin cluster...
	I0629 19:36:20.841285    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force"
	I0629 19:36:20.942318    6596 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0629 19:36:20.942318    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:20.951094    6596 start.go:310] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:20.951094    6596 retry.go:31] will retry after 35.818171134s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:36:20.526892    2758 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:56.780548    6596 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0629 19:36:56.780836    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02"
	I0629 19:36:56.880463    6596 command_runner.go:130] ! W0629 19:36:56.878437    3079 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0629 19:36:56.880463    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0629 19:36:56.933515    6596 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0629 19:36:57.085863    6596 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0629 19:36:57.085965    6596 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0629 19:36:57.154133    6596 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0629 19:36:57.154133    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:57.161847    6596 command_runner.go:130] > [preflight] Running pre-flight checks
	I0629 19:36:57.161847    6596 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0629 19:36:57.161847    6596 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0629 19:36:57.162480    6596 start.go:305] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:36:56.878437    3079 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:57.162480    6596 start.go:308] resetting worker node "m02" before attempting to rejoin cluster...
	I0629 19:36:57.162480    6596 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force"
	I0629 19:36:57.251252    6596 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0629 19:36:57.251340    6596 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:57.259991    6596 start.go:310] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0629 19:36:57.259991    6596 start.go:284] JoinCluster complete in 2m28.0670235s
	I0629 19:36:57.263518    6596 out.go:177] 
	W0629 19:36:57.266395    6596 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 40y8lo.vew557zuy4a3uc94 --discovery-token-ca-cert-hash sha256:d1bbb3132d59d518934fcfefe69341edd183d4ad4fa9deb490cd3767a30d9c9f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220629191914-2408-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0629 19:36:56.878437    3079 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220629191914-2408-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 19:36:57.266395    6596 out.go:239] * 
	W0629 19:36:57.267573    6596 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 19:36:57.269632    6596 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 19:33:03 UTC, end at Wed 2022-06-29 19:37:14 UTC. --
	Jun 29 19:33:21 multinode-20220629191914-2408 systemd[1]: Starting Docker Application Container Engine...
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.782293800Z" level=info msg="Starting up"
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.788470200Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.788652000Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.788699700Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.788722000Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.791486700Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.791674400Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.791735700Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.791758200Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.815964800Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.840620400Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.840737200Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.840754200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.840764200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.840773200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.840781100Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Jun 29 19:33:21 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:21.841218200Z" level=info msg="Loading containers: start."
	Jun 29 19:33:22 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:22.280704500Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 29 19:33:22 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:22.429128800Z" level=info msg="Loading containers: done."
	Jun 29 19:33:22 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:22.494873100Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jun 29 19:33:22 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:22.495051600Z" level=info msg="Daemon has completed initialization"
	Jun 29 19:33:22 multinode-20220629191914-2408 systemd[1]: Started Docker Application Container Engine.
	Jun 29 19:33:22 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:22.548690600Z" level=info msg="API listen on [::]:2376"
	Jun 29 19:33:22 multinode-20220629191914-2408 dockerd[667]: time="2022-06-29T19:33:22.559827800Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	d264c2dabb52f       a4ca41631cc7a                                                                                         3 minutes ago       Running             coredns                   1                   21d05e6e81f9f
	755aa1b5ca514       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   e39d99dbbd216
	6f3c64bd30f91       6fb66cd78abfe                                                                                         3 minutes ago       Running             kindnet-cni               1                   b517509338b4b
	8624727651b8e       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       1                   6244a69fd9308
	9e11ea79d007f       a634548d10b03                                                                                         3 minutes ago       Running             kube-proxy                1                   4b82bbc163a66
	f131d1bc17171       d3377ffb7177c                                                                                         3 minutes ago       Running             kube-apiserver            1                   9dc32904cafe2
	3d01650d2f2f9       34cdf99b1bb3b                                                                                         3 minutes ago       Running             kube-controller-manager   1                   55fcc8e392910
	8ebf80c1e8ca4       aebe758cef4cd                                                                                         3 minutes ago       Running             etcd                      1                   1a6b6d025ad0d
	ee10bd4e7ce8f       5d725196c1f47                                                                                         3 minutes ago       Running             kube-scheduler            1                   1b0f9df1e1993
	7b4f2e2197258       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   13 minutes ago      Exited              busybox                   0                   acb67b2d6fb20
	8b3c86d0a1c51       a4ca41631cc7a                                                                                         15 minutes ago      Exited              coredns                   0                   fbf6b6b051d15
	f0ca108259345       6e38f40d628db                                                                                         15 minutes ago      Exited              storage-provisioner       0                   35d237e18d315
	4a8fd7455c696       kindest/kindnetd@sha256:39494477a3fa001aae716b704a8991f4f62d2ccf1aaaa65692da6c805b18856c              15 minutes ago      Exited              kindnet-cni               0                   01dc6840c9afa
	a474d425b0e4c       a634548d10b03                                                                                         15 minutes ago      Exited              kube-proxy                0                   677fc6b0f18a0
	d7c2cbf716165       aebe758cef4cd                                                                                         16 minutes ago      Exited              etcd                      0                   2b45ac9da3756
	1da5e66d6e614       5d725196c1f47                                                                                         16 minutes ago      Exited              kube-scheduler            0                   2bebeee868d55
	08172ec4cee1b       34cdf99b1bb3b                                                                                         16 minutes ago      Exited              kube-controller-manager   0                   0870274494dbc
	72903587275b3       d3377ffb7177c                                                                                         16 minutes ago      Exited              kube-apiserver            0                   aafba86db1025
	
	* 
	* ==> coredns [8b3c86d0a1c5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [d264c2dabb52] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20220629191914-2408
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220629191914-2408
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed
	                    minikube.k8s.io/name=multinode-20220629191914-2408
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_29T19_21_11_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Jun 2022 19:21:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220629191914-2408
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Jun 2022 19:37:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Jun 2022 19:33:44 +0000   Wed, 29 Jun 2022 19:21:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Jun 2022 19:33:44 +0000   Wed, 29 Jun 2022 19:21:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Jun 2022 19:33:44 +0000   Wed, 29 Jun 2022 19:21:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Jun 2022 19:33:44 +0000   Wed, 29 Jun 2022 19:21:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-20220629191914-2408
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                bbe1e1cef6e940328962dca52b3c5731
	  Boot ID:                    3343ff08-5090-4fcc-990d-809e76a24666
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-d46db594c-dnbhx                                  0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         13m
	  kube-system                 coredns-6d4b75cb6d-6vjv2                                 100m (0%!)(MISSING)     0 (0%!)(MISSING)      70Mi (0%!)(MISSING)        170Mi (0%!)(MISSING)     15m
	  kube-system                 etcd-multinode-20220629191914-2408                       100m (0%!)(MISSING)     0 (0%!)(MISSING)      100Mi (0%!)(MISSING)       0 (0%!)(MISSING)         16m
	  kube-system                 kindnet-b7v2g                                            100m (0%!)(MISSING)     100m (0%!)(MISSING)   50Mi (0%!)(MISSING)        50Mi (0%!)(MISSING)      15m
	  kube-system                 kube-apiserver-multinode-20220629191914-2408             250m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         16m
	  kube-system                 kube-controller-manager-multinode-20220629191914-2408    200m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         16m
	  kube-system                 kube-proxy-5djlc                                         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         15m
	  kube-system                 kube-scheduler-multinode-20220629191914-2408             100m (0%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         16m
	  kube-system                 storage-provisioner                                      0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%!)(MISSING)   100m (0%!)(MISSING)
	  memory             220Mi (0%!)(MISSING)  220Mi (0%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m24s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    16m (x6 over 16m)      kubelet          Node multinode-20220629191914-2408 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x6 over 16m)      kubelet          Node multinode-20220629191914-2408 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  16m (x7 over 16m)      kubelet          Node multinode-20220629191914-2408 status is now: NodeHasSufficientMemory
	  Normal  Starting                 16m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m                    kubelet          Node multinode-20220629191914-2408 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                    kubelet          Node multinode-20220629191914-2408 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                    kubelet          Node multinode-20220629191914-2408 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                    node-controller  Node multinode-20220629191914-2408 event: Registered Node multinode-20220629191914-2408 in Controller
	  Normal  NodeReady                15m                    kubelet          Node multinode-20220629191914-2408 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  3m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m39s (x8 over 3m40s)  kubelet          Node multinode-20220629191914-2408 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m39s (x8 over 3m40s)  kubelet          Node multinode-20220629191914-2408 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s (x7 over 3m40s)  kubelet          Node multinode-20220629191914-2408 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m15s                  node-controller  Node multinode-20220629191914-2408 event: Registered Node multinode-20220629191914-2408 in Controller
	
	
	Name:               multinode-20220629191914-2408-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220629191914-2408-m02
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Jun 2022 19:34:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220629191914-2408-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Jun 2022 19:37:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Jun 2022 19:34:46 +0000   Wed, 29 Jun 2022 19:34:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Jun 2022 19:34:46 +0000   Wed, 29 Jun 2022 19:34:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Jun 2022 19:34:46 +0000   Wed, 29 Jun 2022 19:34:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Jun 2022 19:34:46 +0000   Wed, 29 Jun 2022 19:34:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-20220629191914-2408-m02
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                bbe1e1cef6e940328962dca52b3c5731
	  Boot ID:                    3343ff08-5090-4fcc-990d-809e76a24666
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-q54ld       100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-proxy-2mz9l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (0%)  100m (0%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                   From        Message
	  ----    ------                   ----                  ----        -------
	  Normal  Starting                 13m                   kube-proxy  
	  Normal  Starting                 2m18s                 kube-proxy  
	  Normal  NodeHasSufficientMemory  13m (x8 over 14m)     kubelet     Node multinode-20220629191914-2408-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 14m)     kubelet     Node multinode-20220629191914-2408-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 3m6s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m6s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     3m (x7 over 3m6s)     kubelet     Node multinode-20220629191914-2408-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m53s (x8 over 3m6s)  kubelet     Node multinode-20220629191914-2408-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x8 over 3m6s)  kubelet     Node multinode-20220629191914-2408-m02 status is now: NodeHasNoDiskPressure
	
	
	Name:               multinode-20220629191914-2408-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220629191914-2408-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Jun 2022 19:31:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220629191914-2408-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Jun 2022 19:32:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 29 Jun 2022 19:31:52 +0000   Wed, 29 Jun 2022 19:34:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 29 Jun 2022 19:31:52 +0000   Wed, 29 Jun 2022 19:34:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 29 Jun 2022 19:31:52 +0000   Wed, 29 Jun 2022 19:34:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 29 Jun 2022 19:31:52 +0000   Wed, 29 Jun 2022 19:34:39 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.58.4
	  Hostname:    multinode-20220629191914-2408-m03
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                bbe1e1cef6e940328962dca52b3c5731
	  Boot ID:                    3343ff08-5090-4fcc-990d-809e76a24666
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-d46db594c-qdhrp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kindnet-wbwzc              100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-proxy-bccdh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (0%)  100m (0%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 5m19s              kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node multinode-20220629191914-2408-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node multinode-20220629191914-2408-m03 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m23s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m22s              kubelet          Node multinode-20220629191914-2408-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s              kubelet          Node multinode-20220629191914-2408-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s              kubelet          Node multinode-20220629191914-2408-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m22s              kubelet          Node multinode-20220629191914-2408-m03 status is now: NodeReady
	  Normal  RegisteredNode           3m15s              node-controller  Node multinode-20220629191914-2408-m03 event: Registered Node multinode-20220629191914-2408-m03 in Controller
	  Normal  NodeNotReady             2m35s              node-controller  Node multinode-20220629191914-2408-m03 status is now: NodeNotReady
	
	* 
	* ==> dmesg <==
	* [Jun29 19:11] WSL2: Performing memory compaction.
	[Jun29 19:12] WSL2: Performing memory compaction.
	[Jun29 19:13] WSL2: Performing memory compaction.
	[Jun29 19:14] WSL2: Performing memory compaction.
	[Jun29 19:15] WSL2: Performing memory compaction.
	[Jun29 19:16] WSL2: Performing memory compaction.
	[Jun29 19:17] WSL2: Performing memory compaction.
	[Jun29 19:18] WSL2: Performing memory compaction.
	[Jun29 19:19] WSL2: Performing memory compaction.
	[Jun29 19:20] WSL2: Performing memory compaction.
	[Jun29 19:21] WSL2: Performing memory compaction.
	[Jun29 19:23] WSL2: Performing memory compaction.
	[Jun29 19:24] WSL2: Performing memory compaction.
	[Jun29 19:25] WSL2: Performing memory compaction.
	[Jun29 19:26] WSL2: Performing memory compaction.
	[Jun29 19:27] WSL2: Performing memory compaction.
	[Jun29 19:28] WSL2: Performing memory compaction.
	[Jun29 19:29] WSL2: Performing memory compaction.
	[Jun29 19:30] WSL2: Performing memory compaction.
	[Jun29 19:31] WSL2: Performing memory compaction.
	[Jun29 19:32] WSL2: Performing memory compaction.
	[Jun29 19:34] WSL2: Performing memory compaction.
	[Jun29 19:35] WSL2: Performing memory compaction.
	[Jun29 19:36] WSL2: Performing memory compaction.
	[Jun29 19:37] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [8ebf80c1e8ca] <==
	* {"level":"info","ts":"2022-06-29T19:33:39.821Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:33:39.821Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:33:39.823Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T19:33:39.823Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-29T19:33:39.823Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-29T19:33:39.897Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2022-06-29T19:33:44.513Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.9072ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-20220629191914-2408\" ","response":"range_response_count:1 size:699"}
	{"level":"info","ts":"2022-06-29T19:33:44.513Z","caller":"traceutil/trace.go:171","msg":"trace[1660154470] range","detail":"{range_begin:/registry/csinodes/multinode-20220629191914-2408; range_end:; response_count:1; response_revision:1126; }","duration":"111.0919ms","start":"2022-06-29T19:33:44.402Z","end":"2022-06-29T19:33:44.513Z","steps":["trace[1660154470] 'agreement among raft nodes before linearized reading'  (duration: 105.3262ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T19:33:44.513Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.8072ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" ","response":"range_response_count:1 size:3015"}
	{"level":"info","ts":"2022-06-29T19:33:44.513Z","caller":"traceutil/trace.go:171","msg":"trace[553008818] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:1; response_revision:1126; }","duration":"113.2872ms","start":"2022-06-29T19:33:44.400Z","end":"2022-06-29T19:33:44.513Z","steps":["trace[553008818] 'agreement among raft nodes before linearized reading'  (duration: 107.1626ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T19:33:44.513Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.4198ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" ","response":"range_response_count:2 size:1912"}
	{"level":"info","ts":"2022-06-29T19:33:44.513Z","caller":"traceutil/trace.go:171","msg":"trace[1551678423] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:2; response_revision:1126; }","duration":"112.0329ms","start":"2022-06-29T19:33:44.401Z","end":"2022-06-29T19:33:44.513Z","steps":["trace[1551678423] 'agreement among raft nodes before linearized reading'  (duration: 105.6613ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T19:33:44.513Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.03ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-20220629191914-2408\" ","response":"range_response_count:1 size:4841"}
	{"level":"info","ts":"2022-06-29T19:33:44.514Z","caller":"traceutil/trace.go:171","msg":"trace[33337947] range","detail":"{range_begin:/registry/minions/multinode-20220629191914-2408; range_end:; response_count:1; response_revision:1126; }","duration":"112.9903ms","start":"2022-06-29T19:33:44.401Z","end":"2022-06-29T19:33:44.514Z","steps":["trace[33337947] 'agreement among raft nodes before linearized reading'  (duration: 106.2515ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T19:33:48.596Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.0813ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2022-06-29T19:33:48.596Z","caller":"traceutil/trace.go:171","msg":"trace[83930716] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/pv-protection-controller; range_end:; response_count:1; response_revision:1150; }","duration":"100.3354ms","start":"2022-06-29T19:33:48.496Z","end":"2022-06-29T19:33:48.596Z","steps":["trace[83930716] 'agreement among raft nodes before linearized reading'  (duration: 20.8203ms)","trace[83930716] 'range keys from in-memory index tree'  (duration: 79.2382ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T19:34:35.702Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.0589ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3238512204071298366 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1254 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1043 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-06-29T19:34:35.703Z","caller":"traceutil/trace.go:171","msg":"trace[773697937] linearizableReadLoop","detail":"{readStateIndex:1460; appliedIndex:1459; }","duration":"182.153ms","start":"2022-06-29T19:34:35.520Z","end":"2022-06-29T19:34:35.703Z","steps":["trace[773697937] 'read index received'  (duration: 76.7104ms)","trace[773697937] 'applied index is now lower than readState.Index'  (duration: 105.2993ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-29T19:34:35.703Z","caller":"traceutil/trace.go:171","msg":"trace[1361651225] transaction","detail":"{read_only:false; response_revision:1278; number_of_response:1; }","duration":"188.8132ms","start":"2022-06-29T19:34:35.514Z","end":"2022-06-29T19:34:35.703Z","steps":["trace[1361651225] 'process raft request'  (duration: 83.203ms)","trace[1361651225] 'compare'  (duration: 99.9032ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T19:34:35.703Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"182.776ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-20220629191914-2408-m02\" ","response":"range_response_count:1 size:3531"}
	{"level":"info","ts":"2022-06-29T19:34:35.703Z","caller":"traceutil/trace.go:171","msg":"trace[164447608] range","detail":"{range_begin:/registry/minions/multinode-20220629191914-2408-m02; range_end:; response_count:1; response_revision:1278; }","duration":"182.831ms","start":"2022-06-29T19:34:35.520Z","end":"2022-06-29T19:34:35.703Z","steps":["trace[164447608] 'agreement among raft nodes before linearized reading'  (duration: 182.7178ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T19:34:35.703Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"181.6918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T19:34:35.703Z","caller":"traceutil/trace.go:171","msg":"trace[915868192] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1278; }","duration":"181.7603ms","start":"2022-06-29T19:34:35.522Z","end":"2022-06-29T19:34:35.703Z","steps":["trace[915868192] 'agreement among raft nodes before linearized reading'  (duration: 181.6537ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T19:34:35.810Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.2491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-public/cluster-info\" ","response":"range_response_count:1 size:2772"}
	{"level":"info","ts":"2022-06-29T19:34:35.810Z","caller":"traceutil/trace.go:171","msg":"trace[1434085222] range","detail":"{range_begin:/registry/configmaps/kube-public/cluster-info; range_end:; response_count:1; response_revision:1279; }","duration":"101.4188ms","start":"2022-06-29T19:34:35.708Z","end":"2022-06-29T19:34:35.810Z","steps":["trace[1434085222] 'agreement among raft nodes before linearized reading'  (duration: 92.7187ms)"],"step_count":1}
	
	* 
	* ==> etcd [d7c2cbf71616] <==
	* {"level":"info","ts":"2022-06-29T19:25:18.018Z","caller":"traceutil/trace.go:171","msg":"trace[2067416986] linearizableReadLoop","detail":"{readStateIndex:733; appliedIndex:732; }","duration":"314.245ms","start":"2022-06-29T19:25:17.703Z","end":"2022-06-29T19:25:18.018Z","steps":["trace[2067416986] 'read index received'  (duration: 249.989ms)","trace[2067416986] 'applied index is now lower than readState.Index'  (duration: 64.2519ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T19:25:18.018Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"314.3971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T19:25:18.018Z","caller":"traceutil/trace.go:171","msg":"trace[771179976] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:671; }","duration":"314.4253ms","start":"2022-06-29T19:25:17.703Z","end":"2022-06-29T19:25:18.018Z","steps":["trace[771179976] 'agreement among raft nodes before linearized reading'  (duration: 314.3184ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T19:25:18.018Z","caller":"traceutil/trace.go:171","msg":"trace[1445328071] transaction","detail":"{read_only:false; response_revision:671; number_of_response:1; }","duration":"543.9791ms","start":"2022-06-29T19:25:17.474Z","end":"2022-06-29T19:25:18.018Z","steps":["trace[1445328071] 'process raft request'  (duration: 202.6218ms)","trace[1445328071] 'compare'  (duration: 340.7625ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T19:25:18.018Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T19:25:17.703Z","time spent":"314.468ms","remote":"127.0.0.1:34396","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-06-29T19:25:18.018Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T19:25:17.474Z","time spent":"544.0613ms","remote":"127.0.0.1:34346","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:663 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238512203876635646 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >"}
	{"level":"info","ts":"2022-06-29T19:26:12.014Z","caller":"traceutil/trace.go:171","msg":"trace[1502051435] transaction","detail":"{read_only:false; response_revision:732; number_of_response:1; }","duration":"100.5461ms","start":"2022-06-29T19:26:11.914Z","end":"2022-06-29T19:26:12.014Z","steps":["trace[1502051435] 'process raft request'  (duration: 100.1401ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T19:26:12.015Z","caller":"traceutil/trace.go:171","msg":"trace[1019741578] transaction","detail":"{read_only:false; response_revision:733; number_of_response:1; }","duration":"100.7121ms","start":"2022-06-29T19:26:11.914Z","end":"2022-06-29T19:26:12.015Z","steps":["trace[1019741578] 'process raft request'  (duration: 100.1885ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T19:26:12.015Z","caller":"traceutil/trace.go:171","msg":"trace[1819103982] transaction","detail":"{read_only:false; response_revision:734; number_of_response:1; }","duration":"100.5822ms","start":"2022-06-29T19:26:11.914Z","end":"2022-06-29T19:26:12.015Z","steps":["trace[1819103982] 'process raft request'  (duration: 100.243ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T19:26:12.015Z","caller":"traceutil/trace.go:171","msg":"trace[909866025] transaction","detail":"{read_only:false; response_revision:731; number_of_response:1; }","duration":"101.3122ms","start":"2022-06-29T19:26:11.913Z","end":"2022-06-29T19:26:12.015Z","steps":["trace[909866025] 'process raft request'  (duration: 81.2615ms)","trace[909866025] 'compare'  (duration: 19.1576ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T19:26:23.804Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.763ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T19:26:23.804Z","caller":"traceutil/trace.go:171","msg":"trace[1241537483] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:762; }","duration":"100.9442ms","start":"2022-06-29T19:26:23.703Z","end":"2022-06-29T19:26:23.804Z","steps":["trace[1241537483] 'range keys from in-memory index tree'  (duration: 100.6883ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T19:26:25.553Z","caller":"traceutil/trace.go:171","msg":"trace[305665448] transaction","detail":"{read_only:false; response_revision:767; number_of_response:1; }","duration":"166.5949ms","start":"2022-06-29T19:26:25.386Z","end":"2022-06-29T19:26:25.553Z","steps":["trace[305665448] 'process raft request'  (duration: 160.7851ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T19:30:59.706Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":711}
	{"level":"info","ts":"2022-06-29T19:30:59.709Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":711,"took":"2.8265ms"}
	{"level":"info","ts":"2022-06-29T19:32:18.208Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-29T19:32:18.208Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"multinode-20220629191914-2408","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/06/29 19:32:18 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/29 19:32:18 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2022/06/29 19:32:18 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-29T19:32:18.398Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-06-29T19:32:18.398Z","caller":"traceutil/trace.go:171","msg":"trace[2096160876] linearizableReadLoop","detail":"{readStateIndex:1279; appliedIndex:1279; }","duration":"192.7996ms","start":"2022-06-29T19:32:18.205Z","end":"2022-06-29T19:32:18.397Z","steps":["trace[2096160876] 'read index received'  (duration: 192.7895ms)","trace[2096160876] 'applied index is now lower than readState.Index'  (duration: 6.8µs)"],"step_count":2}
	{"level":"info","ts":"2022-06-29T19:32:18.498Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-29T19:32:18.499Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-29T19:32:18.499Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"multinode-20220629191914-2408","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> kernel <==
	*  19:37:14 up  1:45,  0 users,  load average: 0.40, 0.91, 0.99
	Linux multinode-20220629191914-2408 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [72903587275b] <==
	* W0629 19:32:27.695975       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:27.725231       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:27.736108       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:27.774583       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:27.803164       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:27.814951       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:27.832561       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:27.870515       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:27.890221       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:27.901823       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:27.928625       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:27.935762       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:27.942510       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:27.970713       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:28.037742       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:28.053906       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:28.054194       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:28.076585       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:28.105344       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:28.119890       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:28.146389       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:28.166214       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:28.191910       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:28.207533       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:32:28.221513       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [f131d1bc1717] <==
	* I0629 19:33:44.200065       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0629 19:33:44.200093       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0629 19:33:44.200122       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0629 19:33:44.197433       1 available_controller.go:491] Starting AvailableConditionController
	I0629 19:33:44.214262       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0629 19:33:44.197482       1 autoregister_controller.go:141] Starting autoregister controller
	I0629 19:33:44.214292       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0629 19:33:44.197495       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0629 19:33:44.300662       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0629 19:33:44.301384       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0629 19:33:44.396204       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0629 19:33:44.396375       1 cache.go:39] Caches are synced for autoregister controller
	I0629 19:33:44.396481       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0629 19:33:44.397118       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0629 19:33:44.397831       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0629 19:33:44.409464       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0629 19:33:44.796897       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0629 19:33:45.204559       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0629 19:33:50.513596       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0629 19:33:52.101874       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0629 19:33:53.343371       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0629 19:33:53.399407       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0629 19:33:53.545822       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0629 19:33:53.598829       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0629 19:34:09.137149       1 controller.go:611] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [08172ec4cee1] <==
	* I0629 19:23:16.565693       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2mz9l"
	I0629 19:23:16.572783       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-q54ld"
	W0629 19:23:20.023670       1 node_lifecycle_controller.go:1014] Missing timestamp for Node multinode-20220629191914-2408-m02. Assuming now as a timestamp.
	I0629 19:23:20.023973       1 event.go:294] "Event occurred" object="multinode-20220629191914-2408-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20220629191914-2408-m02 event: Registered Node multinode-20220629191914-2408-m02 in Controller"
	W0629 19:23:36.938620       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629191914-2408-m02 node
	I0629 19:23:52.912987       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-d46db594c to 2"
	I0629 19:23:52.929414       1 event.go:294] "Event occurred" object="default/busybox-d46db594c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-d46db594c-rbqbj"
	I0629 19:23:52.935183       1 event.go:294] "Event occurred" object="default/busybox-d46db594c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-d46db594c-dnbhx"
	W0629 19:26:11.704944       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629191914-2408-m02 node
	W0629 19:26:11.705094       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220629191914-2408-m03" does not exist
	I0629 19:26:11.895029       1 range_allocator.go:374] Set node multinode-20220629191914-2408-m03 PodCIDR to [10.244.2.0/24]
	I0629 19:26:11.899115       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bccdh"
	I0629 19:26:11.911538       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-wbwzc"
	W0629 19:26:15.080738       1 node_lifecycle_controller.go:1014] Missing timestamp for Node multinode-20220629191914-2408-m03. Assuming now as a timestamp.
	I0629 19:26:15.080917       1 event.go:294] "Event occurred" object="multinode-20220629191914-2408-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20220629191914-2408-m03 event: Registered Node multinode-20220629191914-2408-m03 in Controller"
	W0629 19:26:15.707122       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629191914-2408-m02 node
	W0629 19:31:20.190818       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629191914-2408-m02 node
	I0629 19:31:20.191225       1 event.go:294] "Event occurred" object="multinode-20220629191914-2408-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20220629191914-2408-m03 status is now: NodeNotReady"
	I0629 19:31:20.201763       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-bccdh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0629 19:31:20.223761       1 event.go:294] "Event occurred" object="kube-system/kindnet-wbwzc" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	W0629 19:31:50.852989       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629191914-2408-m02 node
	W0629 19:31:52.139059       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220629191914-2408-m03" does not exist
	W0629 19:31:52.139276       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629191914-2408-m02 node
	I0629 19:31:52.160920       1 range_allocator.go:374] Set node multinode-20220629191914-2408-m03 PodCIDR to [10.244.3.0/24]
	W0629 19:31:52.511265       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629191914-2408-m02 node
	
	* 
	* ==> kube-controller-manager [3d01650d2f2f] <==
	* W0629 19:33:59.202942       1 node_lifecycle_controller.go:1014] Missing timestamp for Node multinode-20220629191914-2408. Assuming now as a timestamp.
	W0629 19:33:59.203319       1 node_lifecycle_controller.go:1014] Missing timestamp for Node multinode-20220629191914-2408-m02. Assuming now as a timestamp.
	W0629 19:33:59.203395       1 node_lifecycle_controller.go:1014] Missing timestamp for Node multinode-20220629191914-2408-m03. Assuming now as a timestamp.
	I0629 19:33:59.203465       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0629 19:33:59.203863       1 shared_informer.go:262] Caches are synced for expand
	I0629 19:33:59.209215       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0629 19:33:59.209931       1 shared_informer.go:262] Caches are synced for cronjob
	I0629 19:33:59.211949       1 shared_informer.go:262] Caches are synced for stateful set
	I0629 19:33:59.211991       1 shared_informer.go:262] Caches are synced for PVC protection
	I0629 19:33:59.296752       1 shared_informer.go:262] Caches are synced for attach detach
	I0629 19:33:59.297066       1 shared_informer.go:262] Caches are synced for persistent volume
	I0629 19:33:59.297085       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 19:33:59.300625       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 19:33:59.711284       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 19:33:59.711419       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 19:33:59.711439       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0629 19:34:32.316081       1 event.go:294] "Event occurred" object="default/busybox-d46db594c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-d46db594c-qdhrp"
	W0629 19:34:35.250167       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629191914-2408-m03 node
	W0629 19:34:35.435888       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629191914-2408-m02 node
	W0629 19:34:35.437245       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220629191914-2408-m02" does not exist
	I0629 19:34:35.444699       1 range_allocator.go:374] Set node multinode-20220629191914-2408-m02 PodCIDR to [10.244.1.0/24]
	W0629 19:34:39.232017       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220629191914-2408-m02 node
	I0629 19:34:39.232024       1 event.go:294] "Event occurred" object="multinode-20220629191914-2408-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20220629191914-2408-m03 status is now: NodeNotReady"
	I0629 19:34:39.243331       1 event.go:294] "Event occurred" object="kube-system/kindnet-wbwzc" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0629 19:34:39.252523       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-bccdh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-proxy [9e11ea79d007] <==
	* I0629 19:33:49.800763       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0629 19:33:49.803886       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0629 19:33:49.807340       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0629 19:33:49.813009       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0629 19:33:49.900331       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0629 19:33:50.098573       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0629 19:33:50.098719       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0629 19:33:50.098774       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 19:33:50.499205       1 server_others.go:206] "Using iptables Proxier"
	I0629 19:33:50.499322       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 19:33:50.499339       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 19:33:50.499356       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 19:33:50.499399       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:33:50.500110       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:33:50.500425       1 server.go:661] "Version info" version="v1.24.2"
	I0629 19:33:50.500465       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 19:33:50.503366       1 config.go:317] "Starting service config controller"
	I0629 19:33:50.503986       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 19:33:50.507202       1 config.go:226] "Starting endpoint slice config controller"
	I0629 19:33:50.507429       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 19:33:50.507406       1 config.go:444] "Starting node config controller"
	I0629 19:33:50.507488       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 19:33:50.605515       1 shared_informer.go:262] Caches are synced for service config
	I0629 19:33:50.608028       1 shared_informer.go:262] Caches are synced for node config
	I0629 19:33:50.608247       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [a474d425b0e4] <==
	* I0629 19:21:23.342506       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0629 19:21:23.392848       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0629 19:21:23.396034       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0629 19:21:23.399785       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0629 19:21:23.403420       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0629 19:21:23.426114       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0629 19:21:23.426311       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0629 19:21:23.426348       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 19:21:23.527851       1 server_others.go:206] "Using iptables Proxier"
	I0629 19:21:23.528026       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 19:21:23.528049       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 19:21:23.528075       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 19:21:23.528116       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:21:23.528721       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:21:23.529497       1 server.go:661] "Version info" version="v1.24.2"
	I0629 19:21:23.529518       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 19:21:23.530737       1 config.go:317] "Starting service config controller"
	I0629 19:21:23.530870       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 19:21:23.530935       1 config.go:444] "Starting node config controller"
	I0629 19:21:23.530950       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 19:21:23.530986       1 config.go:226] "Starting endpoint slice config controller"
	I0629 19:21:23.530997       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 19:21:23.631594       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0629 19:21:23.631741       1 shared_informer.go:262] Caches are synced for service config
	I0629 19:21:23.631762       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [1da5e66d6e61] <==
	* E0629 19:21:05.359783       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0629 19:21:05.367309       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 19:21:05.367423       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0629 19:21:05.552329       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0629 19:21:05.552451       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0629 19:21:05.629295       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0629 19:21:05.629482       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0629 19:21:05.649612       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0629 19:21:05.649730       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0629 19:21:05.694404       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0629 19:21:05.694556       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0629 19:21:05.706472       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 19:21:05.706644       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0629 19:21:05.709603       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0629 19:21:05.709713       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0629 19:21:05.726405       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0629 19:21:05.726519       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0629 19:21:05.746978       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0629 19:21:05.747095       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0629 19:21:05.794620       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0629 19:21:05.794743       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0629 19:21:07.704486       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 19:32:18.201419       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0629 19:32:18.201649       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0629 19:32:18.202046       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	
	* 
	* ==> kube-scheduler [ee10bd4e7ce8] <==
	* I0629 19:33:40.707427       1 serving.go:348] Generated self-signed cert in-memory
	I0629 19:33:44.696493       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0629 19:33:44.696544       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 19:33:44.703157       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0629 19:33:44.703262       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0629 19:33:44.703274       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0629 19:33:44.703301       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 19:33:44.703210       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0629 19:33:44.704340       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0629 19:33:44.703239       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0629 19:33:44.704425       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0629 19:33:44.804493       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I0629 19:33:44.804921       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0629 19:33:44.804952       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 19:33:03 UTC, end at Wed 2022-06-29 19:37:15 UTC. --
	Jun 29 19:33:45 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:45.510886    1267 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/734589bd-4941-4bad-bf82-8782fba95fb0-lib-modules\") pod \"kube-proxy-5djlc\" (UID: \"734589bd-4941-4bad-bf82-8782fba95fb0\") " pod="kube-system/kube-proxy-5djlc"
	Jun 29 19:33:45 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:45.511054    1267 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/734589bd-4941-4bad-bf82-8782fba95fb0-kube-proxy\") pod \"kube-proxy-5djlc\" (UID: \"734589bd-4941-4bad-bf82-8782fba95fb0\") " pod="kube-system/kube-proxy-5djlc"
	Jun 29 19:33:45 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:45.511107    1267 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wcb2\" (UniqueName: \"kubernetes.io/projected/734589bd-4941-4bad-bf82-8782fba95fb0-kube-api-access-4wcb2\") pod \"kube-proxy-5djlc\" (UID: \"734589bd-4941-4bad-bf82-8782fba95fb0\") " pod="kube-system/kube-proxy-5djlc"
	Jun 29 19:33:45 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:45.511166    1267 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ad5ec42d-16a3-429c-a3d7-c08eeb03dcae-tmp\") pod \"storage-provisioner\" (UID: \"ad5ec42d-16a3-429c-a3d7-c08eeb03dcae\") " pod="kube-system/storage-provisioner"
	Jun 29 19:33:45 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:45.511337    1267 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9febc0b9-2af4-478d-acca-bb892672edc1-cni-cfg\") pod \"kindnet-b7v2g\" (UID: \"9febc0b9-2af4-478d-acca-bb892672edc1\") " pod="kube-system/kindnet-b7v2g"
	Jun 29 19:33:45 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:45.511403    1267 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9febc0b9-2af4-478d-acca-bb892672edc1-xtables-lock\") pod \"kindnet-b7v2g\" (UID: \"9febc0b9-2af4-478d-acca-bb892672edc1\") " pod="kube-system/kindnet-b7v2g"
	Jun 29 19:33:45 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:45.511450    1267 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9febc0b9-2af4-478d-acca-bb892672edc1-lib-modules\") pod \"kindnet-b7v2g\" (UID: \"9febc0b9-2af4-478d-acca-bb892672edc1\") " pod="kube-system/kindnet-b7v2g"
	Jun 29 19:33:45 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:45.511505    1267 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvgh9\" (UniqueName: \"kubernetes.io/projected/ad5ec42d-16a3-429c-a3d7-c08eeb03dcae-kube-api-access-wvgh9\") pod \"storage-provisioner\" (UID: \"ad5ec42d-16a3-429c-a3d7-c08eeb03dcae\") " pod="kube-system/storage-provisioner"
	Jun 29 19:33:45 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:45.511587    1267 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g89nw\" (UniqueName: \"kubernetes.io/projected/957527e4-431b-450f-b20f-ead3b2989f97-kube-api-access-g89nw\") pod \"coredns-6d4b75cb6d-6vjv2\" (UID: \"957527e4-431b-450f-b20f-ead3b2989f97\") " pod="kube-system/coredns-6d4b75cb6d-6vjv2"
	Jun 29 19:33:45 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:45.511648    1267 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/734589bd-4941-4bad-bf82-8782fba95fb0-xtables-lock\") pod \"kube-proxy-5djlc\" (UID: \"734589bd-4941-4bad-bf82-8782fba95fb0\") " pod="kube-system/kube-proxy-5djlc"
	Jun 29 19:33:45 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:45.511713    1267 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7n29\" (UniqueName: \"kubernetes.io/projected/824357d6-1a97-4244-b9d3-697c58b2a727-kube-api-access-w7n29\") pod \"busybox-d46db594c-dnbhx\" (UID: \"824357d6-1a97-4244-b9d3-697c58b2a727\") " pod="default/busybox-d46db594c-dnbhx"
	Jun 29 19:33:45 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:45.511769    1267 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/957527e4-431b-450f-b20f-ead3b2989f97-config-volume\") pod \"coredns-6d4b75cb6d-6vjv2\" (UID: \"957527e4-431b-450f-b20f-ead3b2989f97\") " pod="kube-system/coredns-6d4b75cb6d-6vjv2"
	Jun 29 19:33:45 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:45.511913    1267 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb6k4\" (UniqueName: \"kubernetes.io/projected/9febc0b9-2af4-478d-acca-bb892672edc1-kube-api-access-sb6k4\") pod \"kindnet-b7v2g\" (UID: \"9febc0b9-2af4-478d-acca-bb892672edc1\") " pod="kube-system/kindnet-b7v2g"
	Jun 29 19:33:45 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:45.511949    1267 reconciler.go:157] "Reconciler: start to sync state"
	Jun 29 19:33:46 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:46.698926    1267 request.go:601] Waited for 1.0851659s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet/token
	Jun 29 19:33:47 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:47.410111    1267 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="4b82bbc163a665cfbfe9f1d4ddeae10aa6282fe6081780d7b89770098824c268"
	Jun 29 19:33:47 multinode-20220629191914-2408 kubelet[1267]: E0629 19:33:47.411657    1267 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Jun 29 19:33:47 multinode-20220629191914-2408 kubelet[1267]: E0629 19:33:47.411731    1267 helpers.go:673] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	Jun 29 19:33:50 multinode-20220629191914-2408 kubelet[1267]: I0629 19:33:50.702802    1267 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e39d99dbbd216a9af37c9dca50afede40c417204b0892882c1f32fad544f6856"
	Jun 29 19:34:00 multinode-20220629191914-2408 kubelet[1267]: E0629 19:34:00.113945    1267 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Jun 29 19:34:00 multinode-20220629191914-2408 kubelet[1267]: E0629 19:34:00.114104    1267 helpers.go:673] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	Jun 29 19:34:12 multinode-20220629191914-2408 kubelet[1267]: E0629 19:34:12.649797    1267 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Jun 29 19:34:12 multinode-20220629191914-2408 kubelet[1267]: E0629 19:34:12.649955    1267 helpers.go:673] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	Jun 29 19:34:25 multinode-20220629191914-2408 kubelet[1267]: E0629 19:34:25.401140    1267 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Jun 29 19:34:25 multinode-20220629191914-2408 kubelet[1267]: E0629 19:34:25.401274    1267 helpers.go:673] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	
	* 
	* ==> storage-provisioner [8624727651b8] <==
	* I0629 19:33:51.407044       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 19:33:51.507870       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 19:33:51.508039       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0629 19:34:09.141758       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0629 19:34:09.142195       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20220629191914-2408_4453f968-5034-4942-8b1e-d245782e6989!
	I0629 19:34:09.142752       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4aa6f227-9b3a-4f3e-9db9-03b97e9f203f", APIVersion:"v1", ResourceVersion:"1224", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20220629191914-2408_4453f968-5034-4942-8b1e-d245782e6989 became leader
	I0629 19:34:09.244055       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20220629191914-2408_4453f968-5034-4942-8b1e-d245782e6989!
	
	* 
	* ==> storage-provisioner [f0ca10825934] <==
	* I0629 19:21:42.226680       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 19:21:42.309889       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 19:21:42.310050       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0629 19:21:42.401836       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0629 19:21:42.402068       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4aa6f227-9b3a-4f3e-9db9-03b97e9f203f", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20220629191914-2408_b1b63b70-cffd-4a76-9824-97677ced405f became leader
	I0629 19:21:42.402108       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20220629191914-2408_b1b63b70-cffd-4a76-9824-97677ced405f!
	I0629 19:21:42.502384       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20220629191914-2408_b1b63b70-cffd-4a76-9824-97677ced405f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-20220629191914-2408 -n multinode-20220629191914-2408
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-20220629191914-2408 -n multinode-20220629191914-2408: (7.2293507s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-20220629191914-2408 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-d46db594c-qdhrp
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context multinode-20220629191914-2408 describe pod busybox-d46db594c-qdhrp
helpers_test.go:280: (dbg) kubectl --context multinode-20220629191914-2408 describe pod busybox-d46db594c-qdhrp:

-- stdout --
	Name:           busybox-d46db594c-qdhrp
	Namespace:      default
	Priority:       0
	Node:           multinode-20220629191914-2408-m03/
	Labels:         app=busybox
	                pod-template-hash=d46db594c
	Annotations:    <none>
	Status:         Pending
	IP:             
	IPs:            <none>
	Controlled By:  ReplicaSet/busybox-d46db594c
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mdt59 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   True 
	Volumes:
	  kube-api-access-mdt59:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
	                             node.kubernetes.io/unreachable:NoExecute for 300s
	Events:
	  Type    Reason     Age        From  Message
	  ----    ------     ----       ----  -------
	  Normal  Scheduled  <unknown>        Successfully assigned default/busybox-d46db594c-qdhrp to multinode-20220629191914-2408-m03

-- /stdout --
helpers_test.go:283: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (313.49s)

TestNoKubernetes/serial/Start (60.04s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220629195545-2408 --no-kubernetes --driver=docker
E0629 20:00:17.147066    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 20:00:17.479893    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220629195545-2408 --no-kubernetes --driver=docker: exit status 1 (51.3959682s)

-- stdout --
	* [NoKubernetes-20220629195545-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting minikube without Kubernetes NoKubernetes-20220629195545-2408 in cluster NoKubernetes-20220629195545-2408
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...

-- /stdout --
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220629195545-2408 --no-kubernetes --driver=docker" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220629195545-2408
helpers_test.go:231: (dbg) Done: docker inspect NoKubernetes-20220629195545-2408: (1.2567182s)
helpers_test.go:235: (dbg) docker inspect NoKubernetes-20220629195545-2408:

-- stdout --
	[
	    {
	        "Id": "3e1e3f6f56c9846abcf164b91e2ef6a6602b1dfc8b43ee28dc3d283ced540d36",
	        "Created": "2022-06-29T20:00:41.6994048Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 194934,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T20:00:42.8596767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/3e1e3f6f56c9846abcf164b91e2ef6a6602b1dfc8b43ee28dc3d283ced540d36/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e1e3f6f56c9846abcf164b91e2ef6a6602b1dfc8b43ee28dc3d283ced540d36/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e1e3f6f56c9846abcf164b91e2ef6a6602b1dfc8b43ee28dc3d283ced540d36/hosts",
	        "LogPath": "/var/lib/docker/containers/3e1e3f6f56c9846abcf164b91e2ef6a6602b1dfc8b43ee28dc3d283ced540d36/3e1e3f6f56c9846abcf164b91e2ef6a6602b1dfc8b43ee28dc3d283ced540d36-json.log",
	        "Name": "/NoKubernetes-20220629195545-2408",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "NoKubernetes-20220629195545-2408:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "NoKubernetes-20220629195545-2408",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 17091788800,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 17091788800,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5ecd0fc77d8b3b66e0d9362fa6e31643202f5c2a25ebacd415d1d490b750cbd2-init/diff:/var/lib/docker/overlay2/18fa2cfa420a1e80c1aefe4442db7e09e685eb6b69d30a3f812abf0fcd5b1ce8/diff:/var/lib/docker/overlay2/aba3e90b0a8f2d7eefad9e62fda91f6713fdc48732352c033f55a5f2fb9d5051/diff:/var/lib/docker/overlay2/6e33975e7a2b5eb470d2cc19f49dfd8506b5158029ca29518653c9de49149fa9/diff:/var/lib/docker/overlay2/0037ae946c15e22839a7ac209758f6fe7b71b326b867a9ce04ec676f5b8c06a6/diff:/var/lib/docker/overlay2/9c42624bebccf9152803eaad763e31ce035bdcdde0f54bfd5c88b9bb436d8327/diff:/var/lib/docker/overlay2/84521428dc63a36c9c8e902e4a72309e30edf7ca74fce9fc847a1f0322dbc53f/diff:/var/lib/docker/overlay2/7d7f88709e16b5aae440b1e298e370c888250af23e45a901effd41cf24361c60/diff:/var/lib/docker/overlay2/ec651b0921231e96280abd101a8af5a63c74f75e5393917c7c51a4779e8c18ee/diff:/var/lib/docker/overlay2/de54eba4af17491eb746d5d519e3e9d2209bb7e77a7e5e97a3fed0e5222cc91b/diff:/var/lib/docker/overlay2/41f2a6
c56ef2a3c6f7de181184e718ecb06cff24d2f3067f95f7609c8428890c/diff:/var/lib/docker/overlay2/dadd972d4b0ae7e16296c1fd2116b2362dcd68c94ca80683b16746f9f9af4c04/diff:/var/lib/docker/overlay2/d07ed1db13541e2b4edaad932df907a36057e8115f039c12379e1f4bd9358fcc/diff:/var/lib/docker/overlay2/2ea8ed9010b183040dd8663549244a49bf69bebdb52d48dcdfab8bb80ae569e9/diff:/var/lib/docker/overlay2/2b3aef18028ba313056c34b21dd2fe925b2a075b71ce79d4a700666a4a1294f3/diff:/var/lib/docker/overlay2/4f2c4fada74eb6f2253a2e6e3e69366c21a0e146314e507111b068a94431e118/diff:/var/lib/docker/overlay2/eebb16c3252fcc56a8f29f6f4cc140749f09d91c7618992ef26ccd17bc7326a8/diff:/var/lib/docker/overlay2/4c9fb9630f6a81f45d6683e4b35bd45b802de81702df3682376cd5eded2c6293/diff:/var/lib/docker/overlay2/7966fc785bbb93b70572461c2b75d02d408e500cbfbe9fb28a85610069e53048/diff:/var/lib/docker/overlay2/7ccd830d8272e56eb8af3cb67fd85111a15d0bd24740b16d9820d03e8b5e613b/diff:/var/lib/docker/overlay2/98991bbd08d46d706f89f20373025f42bb1eca28599ab9d368327d28d37da3e5/diff:/var/lib/d
ocker/overlay2/acd0db325a9cc956c00473cca2cbe9e8938e54e309f7812ce96651505f2c026c/diff:/var/lib/docker/overlay2/a952097255f1545a148e11dc183ed9d457d086b3b19e6cf5c0a84d334a7868fa/diff:/var/lib/docker/overlay2/3626d83dcea23ee4d5fc8d381d865b85ce85b9ec935c11ea07472acad97752b1/diff:/var/lib/docker/overlay2/daee0769e25b6c6df3644bb280d7cdd0552baeddf00f478d726753a2f02990c4/diff:/var/lib/docker/overlay2/5884da0a2a1c8a365253922fefbfd2861b15e6174a57089f167640bf48fab86d/diff:/var/lib/docker/overlay2/aee5fb879dea6a5dd5d33604a38bc85cef9e6fac8b91d40eae0556920c1f013e/diff:/var/lib/docker/overlay2/329fa8dc36977e4ddcb0c9d5de68a736a48555759a67cc6b901c51a7d20bc940/diff:/var/lib/docker/overlay2/a83adaf465d339d1dbbb19a8e721a3af2dde845d387565c2c23aa55a2a9b3050/diff:/var/lib/docker/overlay2/1eb7c3b1832b132fab8951130f1f3e7525eb849dfd649d730922f192509da8c7/diff:/var/lib/docker/overlay2/63900a7721a42c32f20ce20e83aa0648dc9f1f96e2c44c60b949cdf2ed635b89/diff:/var/lib/docker/overlay2/b1f2b06276b16264d5e1dd74a450a8433b30f118bdbda62a0be9806dc63
962c9/diff:/var/lib/docker/overlay2/1d38b89baa7faea58d17550171f82f91e22823fa4687739f8e96012ba2d6b8bc/diff:/var/lib/docker/overlay2/6a845f21cdc42782d41ea29b6b6d28d87b17e628822711a35d5986ee5327afe9/diff:/var/lib/docker/overlay2/e64e9638ae14983a1fadb7196cfb18b26908f27c9c025d014d1b3e014fe592f2/diff:/var/lib/docker/overlay2/0e4706537848c7cd84366bdfaa32b5a3c84c900772b5ad83d1dfd507ddbfe686/diff:/var/lib/docker/overlay2/2bfed1b7470b0df7e7cb5905c1d1671735c6755b541e4f951e007994f0a090d8/diff:/var/lib/docker/overlay2/a337d8b9854e844eac3af70f23116c0353d23cc66cd2075e17b2f5c4daeb3a54/diff:/var/lib/docker/overlay2/ad5d4a9033e102f17440a355bb241c3e635b435132d5d3b83f45c913c2b142b9/diff:/var/lib/docker/overlay2/533adabc16e60531543e7123b3eb7c5db5a5d5c8b3ff5c5a58a357f9ce9b92a4/diff:/var/lib/docker/overlay2/004d9874f6692e521883f825fdecfdbc36c21b5776c23879841b4718d1b9f2ab/diff:/var/lib/docker/overlay2/aada4d8989429d7ecea61233a031b78c69ebd53862529973a4c5a3f581e5b2dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ecd0fc77d8b3b66e0d9362fa6e31643202f5c2a25ebacd415d1d490b750cbd2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ecd0fc77d8b3b66e0d9362fa6e31643202f5c2a25ebacd415d1d490b750cbd2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ecd0fc77d8b3b66e0d9362fa6e31643202f5c2a25ebacd415d1d490b750cbd2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "NoKubernetes-20220629195545-2408",
	                "Source": "/var/lib/docker/volumes/NoKubernetes-20220629195545-2408/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "NoKubernetes-20220629195545-2408",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "NoKubernetes-20220629195545-2408",
	                "name.minikube.sigs.k8s.io": "NoKubernetes-20220629195545-2408",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "48f3769aba67a7dc5710c2415e850d0234698476927a846665a99da46b5c2a0a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55630"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55631"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55632"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55633"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55634"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/48f3769aba67",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "NoKubernetes-20220629195545-2408": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3e1e3f6f56c9",
	                        "NoKubernetes-20220629195545-2408"
	                    ],
	                    "NetworkID": "0404f87d1e336d222009cd71df2e209c502edf75a743a7a1d48ee776049434f8",
	                    "EndpointID": "0fe47b3780e9b0dd8e6806a5e2b1b96edc84c7b1be219280f0fdb964d208845c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220629195545-2408 -n NoKubernetes-20220629195545-2408
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220629195545-2408 -n NoKubernetes-20220629195545-2408: exit status 3 (7.3632369s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0629 20:00:54.651892    5384 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new native config from ssh using: docker, &{[] [C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\NoKubernetes-20220629195545-2408\id_rsa]}: open C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\NoKubernetes-20220629195545-2408\id_rsa: The system cannot find the file specified.
	E0629 20:00:54.651979    5384 status.go:247] status error: NewSession: new client: new client: Error creating new native config from ssh using: docker, &{[] [C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\NoKubernetes-20220629195545-2408\id_rsa]}: open C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\NoKubernetes-20220629195545-2408\id_rsa: The system cannot find the file specified.

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "NoKubernetes-20220629195545-2408" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestNoKubernetes/serial/Start (60.04s)

TestStartStop/group/default-k8s-different-port/serial/Pause (78.5s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220629201430-2408 --alsologtostderr -v=1

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220629201430-2408 --alsologtostderr -v=1: exit status 80 (10.1354223s)

-- stdout --
	* Pausing node default-k8s-different-port-20220629201430-2408 ... 
	
	

-- /stdout --
** stderr ** 
	I0629 20:25:32.127765    9856 out.go:296] Setting OutFile to fd 1648 ...
	I0629 20:25:32.192086    9856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:25:32.192086    9856 out.go:309] Setting ErrFile to fd 1604...
	I0629 20:25:32.192086    9856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:25:32.204646    9856 out.go:303] Setting JSON to false
	I0629 20:25:32.204697    9856 mustload.go:65] Loading cluster: default-k8s-different-port-20220629201430-2408
	I0629 20:25:32.205232    9856 config.go:178] Loaded profile config "default-k8s-different-port-20220629201430-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:25:32.221479    9856 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629201430-2408 --format={{.State.Status}}
	I0629 20:25:35.438018    9856 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220629201430-2408 --format={{.State.Status}}: (3.2165184s)
	I0629 20:25:35.438018    9856 host.go:66] Checking if "default-k8s-different-port-20220629201430-2408" exists ...
	I0629 20:25:35.445002    9856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220629201430-2408
	I0629 20:25:36.793627    9856 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220629201430-2408: (1.3486161s)
	I0629 20:25:36.794655    9856 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/14420/minikube-v1.26.0-1656448385-14420-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.26.0-1656448385-14420/minikube-v1.26.0-1656448385-14420-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.26.0-1656448385-14420-amd64.iso https://storage.googleapis.com/minikube-builds/iso/14420/minikube-v1.26.0-1656448385-14420.iso https://github.com/kubernetes/minikube/releases/download/v1.26.0-1656448385-14420/minikube-v1.26.0-1656448385-14420.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.26.0-1656448385-14420.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:C:\Users\jenkins.minikube8:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-different-port-20220629201430-2408 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0629 20:25:36.824586    9856 out.go:177] * Pausing node default-k8s-different-port-20220629201430-2408 ... 
	I0629 20:25:36.830683    9856 host.go:66] Checking if "default-k8s-different-port-20220629201430-2408" exists ...
	I0629 20:25:36.843058    9856 ssh_runner.go:195] Run: systemctl --version
	I0629 20:25:36.851382    9856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629201430-2408
	I0629 20:25:38.171236    9856 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629201430-2408: (1.3198451s)
	I0629 20:25:38.171236    9856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57167 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\default-k8s-different-port-20220629201430-2408\id_rsa Username:docker}
	I0629 20:25:38.343976    9856 ssh_runner.go:235] Completed: systemctl --version: (1.5009081s)
	I0629 20:25:38.360356    9856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 20:25:38.397611    9856 pause.go:50] kubelet running: true
	I0629 20:25:38.409659    9856 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0629 20:25:38.821630    9856 ssh_runner.go:195] Run: docker ps --filter status=running --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0629 20:25:38.907893    9856 docker.go:451] Pausing containers: [10a2f9728da8 28e64e74c203 3ac48d7c9825 da19ab351329 e8d81798dfd1 042cc5d77d97 3ced1f4604b4 760bd2bff2d8 5fa0db32f182 867c6c31b956 eedd9268a51a 580dee3616f7 cd6efa75b070 e33ea872250b 675ceed3bcb2 3caf37f29e67 9fa8d6123d68 c500fb2ec9ed]
	I0629 20:25:38.923184    9856 ssh_runner.go:195] Run: docker pause 10a2f9728da8 28e64e74c203 3ac48d7c9825 da19ab351329 e8d81798dfd1 042cc5d77d97 3ced1f4604b4 760bd2bff2d8 5fa0db32f182 867c6c31b956 eedd9268a51a 580dee3616f7 cd6efa75b070 e33ea872250b 675ceed3bcb2 3caf37f29e67 9fa8d6123d68 c500fb2ec9ed
	I0629 20:25:41.255598    9856 ssh_runner.go:235] Completed: docker pause 10a2f9728da8 28e64e74c203 3ac48d7c9825 da19ab351329 e8d81798dfd1 042cc5d77d97 3ced1f4604b4 760bd2bff2d8 5fa0db32f182 867c6c31b956 eedd9268a51a 580dee3616f7 cd6efa75b070 e33ea872250b 675ceed3bcb2 3caf37f29e67 9fa8d6123d68 c500fb2ec9ed: (2.33228s)
	I0629 20:25:41.264055    9856 out.go:177] 
	W0629 20:25:41.267440    9856 out.go:239] X Exiting due to GUEST_PAUSE: docker: docker pause 10a2f9728da8 28e64e74c203 3ac48d7c9825 da19ab351329 e8d81798dfd1 042cc5d77d97 3ced1f4604b4 760bd2bff2d8 5fa0db32f182 867c6c31b956 eedd9268a51a 580dee3616f7 cd6efa75b070 e33ea872250b 675ceed3bcb2 3caf37f29e67 9fa8d6123d68 c500fb2ec9ed: Process exited with status 1
	stdout:
	10a2f9728da8
	28e64e74c203
	3ac48d7c9825
	da19ab351329
	e8d81798dfd1
	042cc5d77d97
	3ced1f4604b4
	760bd2bff2d8
	5fa0db32f182
	867c6c31b956
	eedd9268a51a
	cd6efa75b070
	e33ea872250b
	675ceed3bcb2
	3caf37f29e67
	9fa8d6123d68
	c500fb2ec9ed
	
	stderr:
	Error response from daemon: Cannot pause container 580dee3616f7de3a4036a12852d21224be899c0836b478626275102a8dc652f4: OCI runtime pause failed: unable to freeze: unknown
	
	W0629 20:25:41.267440    9856 out.go:239] * 
	W0629 20:25:41.945861    9856 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube_pause_af5e6777317b02357cc1bb6c73885f084c0a6c97_20.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 20:25:41.949935    9856 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220629201430-2408 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220629201430-2408
helpers_test.go:231: (dbg) Done: docker inspect default-k8s-different-port-20220629201430-2408: (1.1526652s)
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220629201430-2408:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "07c14fc792435494dd480bd6997cd4a737fa1221606fb53328a49dc4a2af2b35",
	        "Created": "2022-06-29T20:15:30.1215786Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T20:18:14.1128585Z",
	            "FinishedAt": "2022-06-29T20:17:51.0691308Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/07c14fc792435494dd480bd6997cd4a737fa1221606fb53328a49dc4a2af2b35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/07c14fc792435494dd480bd6997cd4a737fa1221606fb53328a49dc4a2af2b35/hostname",
	        "HostsPath": "/var/lib/docker/containers/07c14fc792435494dd480bd6997cd4a737fa1221606fb53328a49dc4a2af2b35/hosts",
	        "LogPath": "/var/lib/docker/containers/07c14fc792435494dd480bd6997cd4a737fa1221606fb53328a49dc4a2af2b35/07c14fc792435494dd480bd6997cd4a737fa1221606fb53328a49dc4a2af2b35-json.log",
	        "Name": "/default-k8s-different-port-20220629201430-2408",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220629201430-2408:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220629201430-2408",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ac2a94c8108e5f894523f0890eda6b4237e7afb189b0a40ba4aefc427a709727-init/diff:/var/lib/docker/overlay2/18fa2cfa420a1e80c1aefe4442db7e09e685eb6b69d30a3f812abf0fcd5b1ce8/diff:/var/lib/docker/overlay2/aba3e90b0a8f2d7eefad9e62fda91f6713fdc48732352c033f55a5f2fb9d5051/diff:/var/lib/docker/overlay2/6e33975e7a2b5eb470d2cc19f49dfd8506b5158029ca29518653c9de49149fa9/diff:/var/lib/docker/overlay2/0037ae946c15e22839a7ac209758f6fe7b71b326b867a9ce04ec676f5b8c06a6/diff:/var/lib/docker/overlay2/9c42624bebccf9152803eaad763e31ce035bdcdde0f54bfd5c88b9bb436d8327/diff:/var/lib/docker/overlay2/84521428dc63a36c9c8e902e4a72309e30edf7ca74fce9fc847a1f0322dbc53f/diff:/var/lib/docker/overlay2/7d7f88709e16b5aae440b1e298e370c888250af23e45a901effd41cf24361c60/diff:/var/lib/docker/overlay2/ec651b0921231e96280abd101a8af5a63c74f75e5393917c7c51a4779e8c18ee/diff:/var/lib/docker/overlay2/de54eba4af17491eb746d5d519e3e9d2209bb7e77a7e5e97a3fed0e5222cc91b/diff:/var/lib/docker/overlay2/41f2a6c56ef2a3c6f7de181184e718ecb06cff24d2f3067f95f7609c8428890c/diff:/var/lib/docker/overlay2/dadd972d4b0ae7e16296c1fd2116b2362dcd68c94ca80683b16746f9f9af4c04/diff:/var/lib/docker/overlay2/d07ed1db13541e2b4edaad932df907a36057e8115f039c12379e1f4bd9358fcc/diff:/var/lib/docker/overlay2/2ea8ed9010b183040dd8663549244a49bf69bebdb52d48dcdfab8bb80ae569e9/diff:/var/lib/docker/overlay2/2b3aef18028ba313056c34b21dd2fe925b2a075b71ce79d4a700666a4a1294f3/diff:/var/lib/docker/overlay2/4f2c4fada74eb6f2253a2e6e3e69366c21a0e146314e507111b068a94431e118/diff:/var/lib/docker/overlay2/eebb16c3252fcc56a8f29f6f4cc140749f09d91c7618992ef26ccd17bc7326a8/diff:/var/lib/docker/overlay2/4c9fb9630f6a81f45d6683e4b35bd45b802de81702df3682376cd5eded2c6293/diff:/var/lib/docker/overlay2/7966fc785bbb93b70572461c2b75d02d408e500cbfbe9fb28a85610069e53048/diff:/var/lib/docker/overlay2/7ccd830d8272e56eb8af3cb67fd85111a15d0bd24740b16d9820d03e8b5e613b/diff:/var/lib/docker/overlay2/98991bbd08d46d706f89f20373025f42bb1eca28599ab9d368327d28d37da3e5/diff:/var/lib/docker/overlay2/acd0db325a9cc956c00473cca2cbe9e8938e54e309f7812ce96651505f2c026c/diff:/var/lib/docker/overlay2/a952097255f1545a148e11dc183ed9d457d086b3b19e6cf5c0a84d334a7868fa/diff:/var/lib/docker/overlay2/3626d83dcea23ee4d5fc8d381d865b85ce85b9ec935c11ea07472acad97752b1/diff:/var/lib/docker/overlay2/daee0769e25b6c6df3644bb280d7cdd0552baeddf00f478d726753a2f02990c4/diff:/var/lib/docker/overlay2/5884da0a2a1c8a365253922fefbfd2861b15e6174a57089f167640bf48fab86d/diff:/var/lib/docker/overlay2/aee5fb879dea6a5dd5d33604a38bc85cef9e6fac8b91d40eae0556920c1f013e/diff:/var/lib/docker/overlay2/329fa8dc36977e4ddcb0c9d5de68a736a48555759a67cc6b901c51a7d20bc940/diff:/var/lib/docker/overlay2/a83adaf465d339d1dbbb19a8e721a3af2dde845d387565c2c23aa55a2a9b3050/diff:/var/lib/docker/overlay2/1eb7c3b1832b132fab8951130f1f3e7525eb849dfd649d730922f192509da8c7/diff:/var/lib/docker/overlay2/63900a7721a42c32f20ce20e83aa0648dc9f1f96e2c44c60b949cdf2ed635b89/diff:/var/lib/docker/overlay2/b1f2b06276b16264d5e1dd74a450a8433b30f118bdbda62a0be9806dc63962c9/diff:/var/lib/docker/overlay2/1d38b89baa7faea58d17550171f82f91e22823fa4687739f8e96012ba2d6b8bc/diff:/var/lib/docker/overlay2/6a845f21cdc42782d41ea29b6b6d28d87b17e628822711a35d5986ee5327afe9/diff:/var/lib/docker/overlay2/e64e9638ae14983a1fadb7196cfb18b26908f27c9c025d014d1b3e014fe592f2/diff:/var/lib/docker/overlay2/0e4706537848c7cd84366bdfaa32b5a3c84c900772b5ad83d1dfd507ddbfe686/diff:/var/lib/docker/overlay2/2bfed1b7470b0df7e7cb5905c1d1671735c6755b541e4f951e007994f0a090d8/diff:/var/lib/docker/overlay2/a337d8b9854e844eac3af70f23116c0353d23cc66cd2075e17b2f5c4daeb3a54/diff:/var/lib/docker/overlay2/ad5d4a9033e102f17440a355bb241c3e635b435132d5d3b83f45c913c2b142b9/diff:/var/lib/docker/overlay2/533adabc16e60531543e7123b3eb7c5db5a5d5c8b3ff5c5a58a357f9ce9b92a4/diff:/var/lib/docker/overlay2/004d9874f6692e521883f825fdecfdbc36c21b5776c23879841b4718d1b9f2ab/diff:/var/lib/docker/overlay2/aada4d8989429d7ecea61233a031b78c69ebd53862529973a4c5a3f581e5b2dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac2a94c8108e5f894523f0890eda6b4237e7afb189b0a40ba4aefc427a709727/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac2a94c8108e5f894523f0890eda6b4237e7afb189b0a40ba4aefc427a709727/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac2a94c8108e5f894523f0890eda6b4237e7afb189b0a40ba4aefc427a709727/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220629201430-2408",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220629201430-2408/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220629201430-2408",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220629201430-2408",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220629201430-2408",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "47364ffaf80da4e8be1fb2d0a2d4d4d433d03e56b665a75a1638e08a29c503ef",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57167"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57168"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57169"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57170"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57171"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/47364ffaf80d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220629201430-2408": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "07c14fc79243",
	                        "default-k8s-different-port-20220629201430-2408"
	                    ],
	                    "NetworkID": "d88bb354aff4ce91db16a4dcfedbd3ffa4db13d0bb2d32411fdb12923981dd82",
	                    "EndpointID": "71cfec67be98c9ad7e298190be5f0257a3b42161420b711894064d4b376f4b99",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220629201430-2408 -n default-k8s-different-port-20220629201430-2408
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220629201430-2408 -n default-k8s-different-port-20220629201430-2408: exit status 2 (7.2928676s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-different-port-20220629201430-2408 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p default-k8s-different-port-20220629201430-2408 logs -n 25: (18.2102497s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                            | Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p                                        | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:16 GMT | 29 Jun 22 20:16 GMT |
	|         | no-preload-20220629201225-2408                             |          |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |                   |         |                     |                     |
	| start   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:16 GMT | 29 Jun 22 20:23 GMT |
	|         | no-preload-20220629201225-2408                             |          |                   |         |                     |                     |
	|         | --memory=2200                                              |          |                   |         |                     |                     |
	|         | --alsologtostderr                                          |          |                   |         |                     |                     |
	|         | --wait=true --preload=false                                |          |                   |         |                     |                     |
	|         | --driver=docker                                            |          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |                   |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:16 GMT | 29 Jun 22 20:16 GMT |
	|         | embed-certs-20220629201242-2408                            |          |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |                   |         |                     |                     |
	| start   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:16 GMT | 29 Jun 22 20:23 GMT |
	|         | embed-certs-20220629201242-2408                            |          |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |                   |         |                     |                     |
	|         | --wait=true --embed-certs                                  |          |                   |         |                     |                     |
	|         | --driver=docker                                            |          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |                   |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:17 GMT | 29 Jun 22 20:17 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |                   |         |                     |                     |
	| stop    | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:17 GMT | 29 Jun 22 20:17 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |                   |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:17 GMT | 29 Jun 22 20:18 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |                   |         |                     |                     |
	| start   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:18 GMT | 29 Jun 22 20:24 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |                   |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |                   |         |                     |                     |
	| ssh     | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:23 GMT | 29 Jun 22 20:23 GMT |
	|         | embed-certs-20220629201242-2408                            |          |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |                   |         |                     |                     |
	| pause   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:23 GMT | 29 Jun 22 20:23 GMT |
	|         | embed-certs-20220629201242-2408                            |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| ssh     | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:24 GMT |
	|         | old-k8s-version-20220629201126-2408                        |          |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |                   |         |                     |                     |
	| pause   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:24 GMT |
	|         | old-k8s-version-20220629201126-2408                        |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| unpause | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:24 GMT |
	|         | embed-certs-20220629201242-2408                            |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| ssh     | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:24 GMT |
	|         | no-preload-20220629201225-2408                             |          |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |                   |         |                     |                     |
	| pause   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:24 GMT |
	|         | no-preload-20220629201225-2408                             |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| unpause | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:24 GMT |
	|         | old-k8s-version-20220629201126-2408                        |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:25 GMT |
	|         | embed-certs-20220629201242-2408                            |          |                   |         |                     |                     |
	| unpause | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:25 GMT |
	|         | no-preload-20220629201225-2408                             |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | old-k8s-version-20220629201126-2408                        |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | embed-certs-20220629201242-2408                            |          |                   |         |                     |                     |
	| start   | -p newest-cni-20220629202523-2408 --memory=2200            | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT |                     |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.24.2               |          |                   |         |                     |                     |
	| ssh     | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT |                     |
	|         | no-preload-20220629201225-2408                             |          |                   |         |                     |                     |
	| pause   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT |                     |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT |                     |
	|         | old-k8s-version-20220629201126-2408                        |          |                   |         |                     |                     |
	|---------|------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 20:25:23
	Running on machine: minikube8
	Binary: Built with gc go1.18.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 20:25:23.424100    4664 out.go:296] Setting OutFile to fd 1844 ...
	I0629 20:25:23.482455    4664 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:25:23.482455    4664 out.go:309] Setting ErrFile to fd 1676...
	I0629 20:25:23.482455    4664 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:25:23.506425    4664 out.go:303] Setting JSON to false
	I0629 20:25:23.510406    4664 start.go:115] hostinfo: {"hostname":"minikube8","uptime":26885,"bootTime":1656507438,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0629 20:25:23.510587    4664 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 20:25:23.516269    4664 out.go:177] * [newest-cni-20220629202523-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0629 20:25:23.520576    4664 notify.go:193] Checking for updates...
	I0629 20:25:23.523189    4664 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 20:25:23.526133    4664 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0629 20:25:23.528825    4664 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 20:25:23.531272    4664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 20:25:23.536489    4664 config.go:178] Loaded profile config "default-k8s-different-port-20220629201430-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:25:23.537215    4664 config.go:178] Loaded profile config "no-preload-20220629201225-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:25:23.537215    4664 config.go:178] Loaded profile config "old-k8s-version-20220629201126-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0629 20:25:23.538218    4664 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 20:25:27.192408    4664 docker.go:137] docker version: linux-20.10.16
	I0629 20:25:27.199992    4664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:25:29.465782    4664 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2656818s)
	I0629 20:25:29.466433    4664 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:73 OomKillDisable:true NGoroutines:60 SystemTime:2022-06-29 20:25:28.3531314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:25:29.495163    4664 out.go:177] * Using the docker driver based on user configuration
	I0629 20:25:29.509011    4664 start.go:284] selected driver: docker
	I0629 20:25:29.509011    4664 start.go:808] validating driver "docker" against <nil>
	I0629 20:25:29.509239    4664 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 20:25:29.635405    4664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:25:31.779075    4664 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1436077s)
	I0629 20:25:31.779268    4664 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:73 OomKillDisable:true NGoroutines:60 SystemTime:2022-06-29 20:25:30.7217606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:25:31.779268    4664 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	W0629 20:25:31.779268    4664 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0629 20:25:31.780251    4664 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0629 20:25:31.813003    4664 out.go:177] * Using Docker Desktop driver with root privileges
	I0629 20:25:31.818921    4664 cni.go:95] Creating CNI manager for ""
	I0629 20:25:31.818921    4664 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 20:25:31.818921    4664 start_flags.go:310] config:
	{Name:newest-cni-20220629202523-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220629202523-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 20:25:31.824905    4664 out.go:177] * Starting control plane node newest-cni-20220629202523-2408 in cluster newest-cni-20220629202523-2408
	I0629 20:25:31.827913    4664 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 20:25:31.831905    4664 out.go:177] * Pulling base image ...
	I0629 20:25:31.837897    4664 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 20:25:31.837897    4664 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 20:25:31.837897    4664 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 20:25:31.837897    4664 cache.go:57] Caching tarball of preloaded images
	I0629 20:25:31.838921    4664 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 20:25:31.838921    4664 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 20:25:31.838921    4664 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\newest-cni-20220629202523-2408\config.json ...
	I0629 20:25:31.838921    4664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\newest-cni-20220629202523-2408\config.json: {Name:mkadb2eec05de48e440a390dbeda47ac3aa7f7e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:25:32.994428    4664 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 20:25:32.994558    4664 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 20:25:32.994558    4664 cache.go:208] Successfully downloaded all kic artifacts
	I0629 20:25:32.994703    4664 start.go:352] acquiring machines lock for newest-cni-20220629202523-2408: {Name:mkdf6a701a9a3f7bb051535992016ae8841b9778 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 20:25:32.994965    4664 start.go:356] acquired machines lock for "newest-cni-20220629202523-2408" in 262.2µs
	I0629 20:25:32.995147    4664 start.go:91] Provisioning new machine with config: &{Name:newest-cni-20220629202523-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220629202523-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 20:25:32.995339    4664 start.go:131] createHost starting for "" (driver="docker")
	I0629 20:25:33.003356    4664 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0629 20:25:33.003356    4664 start.go:165] libmachine.API.Create for "newest-cni-20220629202523-2408" (driver="docker")
	I0629 20:25:33.003356    4664 client.go:168] LocalClient.Create starting
	I0629 20:25:33.004196    4664 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I0629 20:25:33.004196    4664 main.go:134] libmachine: Decoding PEM data...
	I0629 20:25:33.004196    4664 main.go:134] libmachine: Parsing certificate...
	I0629 20:25:33.004829    4664 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I0629 20:25:33.004829    4664 main.go:134] libmachine: Decoding PEM data...
	I0629 20:25:33.004829    4664 main.go:134] libmachine: Parsing certificate...
	I0629 20:25:33.013113    4664 cli_runner.go:164] Run: docker network inspect newest-cni-20220629202523-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0629 20:25:34.163681    4664 cli_runner.go:211] docker network inspect newest-cni-20220629202523-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0629 20:25:34.163681    4664 cli_runner.go:217] Completed: docker network inspect newest-cni-20220629202523-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1505605s)
	I0629 20:25:34.170680    4664 network_create.go:272] running [docker network inspect newest-cni-20220629202523-2408] to gather additional debugging logs...
	I0629 20:25:34.170680    4664 cli_runner.go:164] Run: docker network inspect newest-cni-20220629202523-2408
	W0629 20:25:35.376291    4664 cli_runner.go:211] docker network inspect newest-cni-20220629202523-2408 returned with exit code 1
	I0629 20:25:35.376291    4664 cli_runner.go:217] Completed: docker network inspect newest-cni-20220629202523-2408: (1.2056033s)
	I0629 20:25:35.376291    4664 network_create.go:275] error running [docker network inspect newest-cni-20220629202523-2408]: docker network inspect newest-cni-20220629202523-2408: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220629202523-2408
	I0629 20:25:35.376291    4664 network_create.go:277] output of [docker network inspect newest-cni-20220629202523-2408]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220629202523-2408
	
	** /stderr **
	I0629 20:25:35.387534    4664 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0629 20:25:36.669234    4664 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2816911s)
	I0629 20:25:36.689232    4664 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006e18] misses:0}
	I0629 20:25:36.690264    4664 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:25:36.690264    4664 network_create.go:115] attempt to create docker network newest-cni-20220629202523-2408 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0629 20:25:36.697246    4664 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220629202523-2408 newest-cni-20220629202523-2408
	W0629 20:25:37.954627    4664 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220629202523-2408 newest-cni-20220629202523-2408 returned with exit code 1
	I0629 20:25:37.954627    4664 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220629202523-2408 newest-cni-20220629202523-2408: (1.2573735s)
	W0629 20:25:37.954627    4664 network_create.go:107] failed to create docker network newest-cni-20220629202523-2408 192.168.49.0/24, will retry: subnet is taken
	I0629 20:25:37.975627    4664 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006e18] amended:false}} dirty:map[] misses:0}
	I0629 20:25:37.976008    4664 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:25:38.000035    4664 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006e18] amended:true}} dirty:map[192.168.49.0:0xc000006e18 192.168.58.0:0xc000138648] misses:0}
	I0629 20:25:38.000969    4664 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:25:38.000969    4664 network_create.go:115] attempt to create docker network newest-cni-20220629202523-2408 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0629 20:25:38.019939    4664 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220629202523-2408 newest-cni-20220629202523-2408
	W0629 20:25:39.263469    4664 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220629202523-2408 newest-cni-20220629202523-2408 returned with exit code 1
	I0629 20:25:39.263469    4664 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220629202523-2408 newest-cni-20220629202523-2408: (1.2435221s)
	W0629 20:25:39.263469    4664 network_create.go:107] failed to create docker network newest-cni-20220629202523-2408 192.168.58.0/24, will retry: subnet is taken
	I0629 20:25:39.282460    4664 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006e18] amended:true}} dirty:map[192.168.49.0:0xc000006e18 192.168.58.0:0xc000138648] misses:1}
	I0629 20:25:39.282460    4664 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:25:39.301370    4664 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006e18] amended:true}} dirty:map[192.168.49.0:0xc000006e18 192.168.58.0:0xc000138648 192.168.67.0:0xc000006eb0] misses:1}
	I0629 20:25:39.301370    4664 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:25:39.301370    4664 network_create.go:115] attempt to create docker network newest-cni-20220629202523-2408 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0629 20:25:39.307431    4664 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220629202523-2408 newest-cni-20220629202523-2408
	W0629 20:25:40.427425    4664 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220629202523-2408 newest-cni-20220629202523-2408 returned with exit code 1
	I0629 20:25:40.427425    4664 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220629202523-2408 newest-cni-20220629202523-2408: (1.1199872s)
	W0629 20:25:40.427425    4664 network_create.go:107] failed to create docker network newest-cni-20220629202523-2408 192.168.67.0/24, will retry: subnet is taken
	I0629 20:25:40.445998    4664 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006e18] amended:true}} dirty:map[192.168.49.0:0xc000006e18 192.168.58.0:0xc000138648 192.168.67.0:0xc000006eb0] misses:2}
	I0629 20:25:40.446501    4664 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:25:40.467171    4664 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006e18] amended:true}} dirty:map[192.168.49.0:0xc000006e18 192.168.58.0:0xc000138648 192.168.67.0:0xc000006eb0 192.168.76.0:0xc0001386e0] misses:2}
	I0629 20:25:40.468160    4664 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:25:40.468160    4664 network_create.go:115] attempt to create docker network newest-cni-20220629202523-2408 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0629 20:25:40.475167    4664 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220629202523-2408 newest-cni-20220629202523-2408
	I0629 20:25:41.759401    4664 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-20220629202523-2408 newest-cni-20220629202523-2408: (1.2841144s)
	I0629 20:25:41.759532    4664 network_create.go:99] docker network newest-cni-20220629202523-2408 192.168.76.0/24 created
	I0629 20:25:41.759738    4664 kic.go:106] calculated static IP "192.168.76.2" for the "newest-cni-20220629202523-2408" container
	I0629 20:25:41.787444    4664 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0629 20:25:42.936359    4664 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1489079s)
	I0629 20:25:42.942366    4664 cli_runner.go:164] Run: docker volume create newest-cni-20220629202523-2408 --label name.minikube.sigs.k8s.io=newest-cni-20220629202523-2408 --label created_by.minikube.sigs.k8s.io=true
	I0629 20:25:44.149225    4664 cli_runner.go:217] Completed: docker volume create newest-cni-20220629202523-2408 --label name.minikube.sigs.k8s.io=newest-cni-20220629202523-2408 --label created_by.minikube.sigs.k8s.io=true: (1.2068513s)
	I0629 20:25:44.149460    4664 oci.go:103] Successfully created a docker volume newest-cni-20220629202523-2408
	I0629 20:25:44.157149    4664 cli_runner.go:164] Run: docker run --rm --name newest-cni-20220629202523-2408-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220629202523-2408 --entrypoint /usr/bin/test -v newest-cni-20220629202523-2408:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib
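The retry sequence above walks candidate subnets 192.168.49.0/24 → 58.0 → 67.0 → 76.0, retrying `docker network create` whenever the subnet is taken. A minimal sketch of that observed pattern (the step of 9 in the third octet and the function names here are inferred from this log, not from minikube's source):

```python
import subprocess

def next_subnet(third_octet: int) -> int:
    # The log above advances the third octet by 9 each retry: 49 -> 58 -> 67 -> 76.
    return third_octet + 9

def create_network(name: str, start: int = 49, attempts: int = 4) -> str:
    """Try successive 192.168.x.0/24 subnets until `docker network create` succeeds."""
    octet = start
    for _ in range(attempts):
        subnet = f"192.168.{octet}.0/24"
        gateway = f"192.168.{octet}.1"
        proc = subprocess.run(
            ["docker", "network", "create", "--driver=bridge",
             f"--subnet={subnet}", f"--gateway={gateway}", name],
            capture_output=True, text=True)
        if proc.returncode == 0:
            return subnet  # in the run above, the fourth attempt (192.168.76.0/24) succeeded
        octet = next_subnet(octet)  # "subnet is taken": reserve and try the next /24
    raise RuntimeError(f"no free subnet found for network {name}")
```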
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 20:18:14 UTC, end at Wed 2022-06-29 20:25:57 UTC. --
	Jun 29 20:23:20 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:20.256165200Z" level=info msg="ignoring event" container=012b07bc74ef270870c7cf928c1a7bd9a9db9e5f53889605f3b23e74d31ba7e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:21 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:21.034519800Z" level=info msg="ignoring event" container=c1af6fb3418171b9ca12cbd39c72463c2a7c9cf95101d20eb3a5c762fa9e941a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:21 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:21.871447400Z" level=info msg="ignoring event" container=14d35f4f64cf2616e02d880965bf6220aad0301ae7cc431117f7167e5c312bc5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:22 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:22.605555000Z" level=info msg="ignoring event" container=4d29b68ca76d01ad3f5de6e4c4fa9830f5fe15597db3b2d90ec6c4266970ae50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:23 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:23.383457600Z" level=info msg="ignoring event" container=9896eb0b245b0afe73bf633f27e9623c28f8871ca394ab0717fd7850a6f425bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:24 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:24.325003400Z" level=info msg="ignoring event" container=2b66636c4867a8d17f13c52802d45e096dea9e9a1ef7a2de2bc4a378a642aaa3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:24 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:24.934631500Z" level=info msg="ignoring event" container=055be590a520661d1706f02370594078d7469ba48effb7940333ef26979897df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:25 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:25.479887600Z" level=info msg="ignoring event" container=b7cfedd1e9f5cf71ab9045ff0914cf5f39a941751dc380b672870680677912d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:25 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:25.903665300Z" level=info msg="ignoring event" container=6b5c7e704ec1968dbda9dda0b9f5fcc02f5b862b03ac9fc2dff33140aab03e82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:24:17 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:17.335975300Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 20:24:17 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:17.336387000Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 20:24:17 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:17.630073900Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 20:24:17 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:17.724024700Z" level=info msg="ignoring event" container=0e3fc0243d97bb1c54abb7094f10bcbcc284bc1b23fc37a302234e46896d1dcb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:24:18 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:18.723909600Z" level=info msg="ignoring event" container=38314a917e6d52b148e10e3360a4975b4e151106d7bb98dbbfab75806c6c9786 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:24:19 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:19.917229100Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 20:24:20 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:20.459397000Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 20:24:40 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:40.322398500Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 29 20:24:40 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:40.770039500Z" level=info msg="ignoring event" container=972706b53023c656635c4195422fe912938b8061495ed94d7759b883e7d8b886 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:24:41 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:41.797341900Z" level=info msg="ignoring event" container=fd002faae47731d6783778db88d4800d8b27191f70b22d7426dae7f5bfbbc987 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:25:09 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:25:09.864045900Z" level=info msg="ignoring event" container=54fb2a5a3d3a9d7a89e27898edd5340e17b88ffd61cc1e5f943a93106b653f61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:25:11.565629500Z" level=info msg="ignoring event" container=8f46fd284463f57adefa31c3384d40f129e5a729ad3e66ecc81e3bcdae8db49b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:25:11.833427000Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:25:11.834229100Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:25:11.922541900Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 20:25:39 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:25:39.580665300Z" level=error msg="Handler for POST /v1.41/containers/580dee3616f7/pause returned error: Cannot pause container 580dee3616f7de3a4036a12852d21224be899c0836b478626275102a8dc652f4: OCI runtime pause failed: unable to freeze: unknown"
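The dockerd journal above mixes routine `level=info` events with the `level=warning`/`level=error` entries (the `fake.domain` pull failures and the failed pause) that matter for triage. A hypothetical filter for journal lines in this format:

```python
import re

def dockerd_problems(lines):
    """Return (level, msg) pairs for dockerd entries logged at warning or error level."""
    out = []
    for line in lines:
        # msg values may contain backslash-escaped quotes, as in the registry errors above
        m = re.search(r'level=(warning|error)\s+msg="((?:[^"\\]|\\.)*)"', line)
        if m:
            out.append((m.group(1), m.group(2)))
    return out
```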
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	10a2f9728da80       6e38f40d628db                                                                                    46 seconds ago       Running             storage-provisioner         1                   e8d81798dfd19
	28e64e74c203e       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   46 seconds ago       Running             kubernetes-dashboard        0                   3ac48d7c98253
	8f46fd284463f       a90209bb39e3d                                                                                    About a minute ago   Exited              dashboard-metrics-scraper   2                   da19ab3513292
	54fb2a5a3d3a9       6e38f40d628db                                                                                    About a minute ago   Exited              storage-provisioner         0                   e8d81798dfd19
	3ced1f4604b4b       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   5fa0db32f182c
	760bd2bff2d8f       a634548d10b03                                                                                    About a minute ago   Running             kube-proxy                  0                   867c6c31b956f
	eedd9268a51a7       d3377ffb7177c                                                                                    2 minutes ago        Running             kube-apiserver              0                   3caf37f29e67f
	580dee3616f7d       aebe758cef4cd                                                                                    2 minutes ago        Running             etcd                        0                   9fa8d6123d685
	cd6efa75b070a       34cdf99b1bb3b                                                                                    2 minutes ago        Running             kube-controller-manager     0                   675ceed3bcb2e
	e33ea872250bc       5d725196c1f47                                                                                    2 minutes ago        Running             kube-scheduler              0                   c500fb2ec9ed0
	
	* 
	* ==> coredns [3ced1f4604b4] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [ +20.334456] process 'docker/tmp/qemu-check913677031/check' started with executable stack
	[Jun29 20:01] WSL2: Performing memory compaction.
	[Jun29 20:06] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000015] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000100] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000027] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +21.104341] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000007] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.081691] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000052] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Jun29 20:07] WSL2: Performing memory compaction.
	[Jun29 20:08] WSL2: Performing memory compaction.
	[Jun29 20:09] WSL2: Performing memory compaction.
	[Jun29 20:11] WSL2: Performing memory compaction.
	[Jun29 20:12] WSL2: Performing memory compaction.
	[Jun29 20:14] WSL2: Performing memory compaction.
	[Jun29 20:15] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Jun29 20:16] WSL2: Performing memory compaction.
	[Jun29 20:25] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [580dee3616f7] <==
	* {"level":"warn","ts":"2022-06-29T20:25:41.065Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:28.633Z","time spent":"12.4315669s","remote":"127.0.0.1:58648","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":28,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T20:25:41.065Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"13.4825379s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-06-29T20:25:41.065Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"13.4186559s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1148"}
	{"level":"info","ts":"2022-06-29T20:25:41.065Z","caller":"traceutil/trace.go:171","msg":"trace[861525364] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:613; }","duration":"13.4187037s","start":"2022-06-29T20:25:27.646Z","end":"2022-06-29T20:25:41.065Z","steps":["trace[861525364] 'agreement among raft nodes before linearized reading'  (duration: 13.4186077s)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T20:25:41.065Z","caller":"traceutil/trace.go:171","msg":"trace[703802371] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:613; }","duration":"13.482629s","start":"2022-06-29T20:25:27.582Z","end":"2022-06-29T20:25:41.065Z","steps":["trace[703802371] 'agreement among raft nodes before linearized reading'  (duration: 13.4824942s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:25:41.065Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:27.646Z","time spent":"13.4187602s","remote":"127.0.0.1:58562","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1171,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2022-06-29T20:25:41.065Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"6.2623219s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:25:41.065Z","caller":"traceutil/trace.go:171","msg":"trace[1603285542] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:613; }","duration":"6.2624204s","start":"2022-06-29T20:25:34.803Z","end":"2022-06-29T20:25:41.065Z","steps":["trace[1603285542] 'agreement among raft nodes before linearized reading'  (duration: 6.2623084s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:25:41.065Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:27.582Z","time spent":"13.4826858s","remote":"127.0.0.1:58610","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":0,"response size":28,"request content":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:34.803Z","time spent":"6.2625251s","remote":"127.0.0.1:58596","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":0,"response size":28,"request content":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.338067s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:25:41.066Z","caller":"traceutil/trace.go:171","msg":"trace[793042799] range","detail":"{range_begin:/registry/podsecuritypolicy/; range_end:/registry/podsecuritypolicy0; response_count:0; response_revision:613; }","duration":"2.3381083s","start":"2022-06-29T20:25:38.727Z","end":"2022-06-29T20:25:41.066Z","steps":["trace[793042799] 'agreement among raft nodes before linearized reading'  (duration: 2.3380378s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:38.727Z","time spent":"2.338162s","remote":"127.0.0.1:58626","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":28,"request content":"key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.8356947s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-06-29T20:25:41.065Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"7.8479698s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-06-29T20:25:41.066Z","caller":"traceutil/trace.go:171","msg":"trace[1512434769] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; response_count:0; response_revision:613; }","duration":"1.8357704s","start":"2022-06-29T20:25:39.230Z","end":"2022-06-29T20:25:41.066Z","steps":["trace[1512434769] 'agreement among raft nodes before linearized reading'  (duration: 1.8356359s)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T20:25:41.066Z","caller":"traceutil/trace.go:171","msg":"trace[1122594331] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:0; response_revision:613; }","duration":"7.8482342s","start":"2022-06-29T20:25:33.217Z","end":"2022-06-29T20:25:41.066Z","steps":["trace[1122594331] 'agreement among raft nodes before linearized reading'  (duration: 7.847914s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:39.230Z","time spent":"1.8358376s","remote":"127.0.0.1:58612","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":0,"response size":28,"request content":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:33.217Z","time spent":"7.8482998s","remote":"127.0.0.1:58568","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":42,"response size":30,"request content":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"6.0463182s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"3.1022529s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:25:41.066Z","caller":"traceutil/trace.go:171","msg":"trace[1389062586] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:613; }","duration":"3.1023225s","start":"2022-06-29T20:25:37.963Z","end":"2022-06-29T20:25:41.066Z","steps":["trace[1389062586] 'agreement among raft nodes before linearized reading'  (duration: 3.1022556s)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T20:25:41.066Z","caller":"traceutil/trace.go:171","msg":"trace[1467184206] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:0; response_revision:613; }","duration":"6.0464425s","start":"2022-06-29T20:25:35.019Z","end":"2022-06-29T20:25:41.066Z","steps":["trace[1467184206] 'agreement among raft nodes before linearized reading'  (duration: 6.0462855s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:37.963Z","time spent":"3.1023772s","remote":"127.0.0.1:58546","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":28,"request content":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:35.019Z","time spent":"6.0465074s","remote":"127.0.0.1:58646","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":30,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true "}
	
	* 
	* ==> kernel <==
	*  20:26:07 up  2:33,  0 users,  load average: 6.53, 7.57, 6.29
	Linux default-k8s-different-port-20220629201430-2408 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [eedd9268a51a] <==
	* E0629 20:25:38.746196       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0629 20:25:38.746268       1 writers.go:118] apiserver was unable to write a JSON response: http: Handler timeout
	E0629 20:25:38.746427       1 writers.go:118] apiserver was unable to write a JSON response: http: Handler timeout
	{"level":"warn","ts":"2022-06-29T20:25:38.746Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001e8c000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	I0629 20:25:38.746448       1 trace.go:205] Trace[157811910]: "Patch" url:/api/v1/namespaces/kube-system/events/metrics-server-5c6f97fb75-9llbc.16fd3121f4881d7c,user-agent:kubelet/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:a5df021d-c245-415a-b117-7337786914c4,client:192.168.85.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (29-Jun-2022 20:25:34.092) (total time: 4653ms):
	Trace[157811910]: [4.6537841s] [4.6537841s] END
	I0629 20:25:38.746574       1 trace.go:205] Trace[1186453289]: "GuaranteedUpdate etcd3" type:*core.Event (29-Jun-2022 20:25:34.092) (total time: 4653ms):
	Trace[1186453289]: [4.6536581s] [4.6536581s] END
	{"level":"warn","ts":"2022-06-29T20:25:38.746Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000befdc0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0629 20:25:38.746631       1 wrap.go:53] timeout or abort while handling: method=PATCH URI="/api/v1/namespaces/kube-system/events/metrics-server-5c6f97fb75-9llbc.16fd3121f4881d7c" audit-ID="a5df021d-c245-415a-b117-7337786914c4"
	I0629 20:25:38.746636       1 trace.go:205] Trace[1078821673]: "GuaranteedUpdate etcd3" type:*coordination.Lease (29-Jun-2022 20:25:36.906) (total time: 1840ms):
	Trace[1078821673]: [1.840515s] [1.840515s] END
	E0629 20:25:38.746685       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 217µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0629 20:25:38.746821       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 14.3µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0629 20:25:38.748259       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0629 20:25:38.749674       1 timeout.go:141] post-timeout activity - time-elapsed: 3.005ms, PATCH "/api/v1/namespaces/kube-system/events/metrics-server-5c6f97fb75-9llbc.16fd3121f4881d7c" result: <nil>
	E0629 20:25:38.749714       1 writers.go:131] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0629 20:25:38.750993       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	I0629 20:25:38.760064       1 trace.go:205] Trace[233392963]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-default-k8s-different-port-20220629201430-2408,user-agent:kubelet/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:e385e2d6-d19d-4992-b5c7-3f2abddd6c8c,client:192.168.85.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (29-Jun-2022 20:25:33.430) (total time: 5329ms):
	Trace[233392963]: [5.3291501s] [5.3291501s] END
	E0629 20:25:38.760808       1 timeout.go:141] post-timeout activity - time-elapsed: 14.7673ms, GET "/api/v1/namespaces/kube-system/pods/kube-apiserver-default-k8s-different-port-20220629201430-2408" result: <nil>
	E0629 20:25:38.761513       1 writers.go:131] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0629 20:25:38.762896       1 trace.go:205] Trace[573860594]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/default-k8s-different-port-20220629201430-2408,user-agent:kubelet/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:f1240996-6d4e-482a-83f2-6b1fe7d06c19,client:192.168.85.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (29-Jun-2022 20:25:36.905) (total time: 1856ms):
	Trace[573860594]: [1.8569964s] [1.8569964s] END
	E0629 20:25:38.764789       1 timeout.go:141] post-timeout activity - time-elapsed: 18.5399ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/default-k8s-different-port-20220629201430-2408" result: <nil>
	
	* 
	* ==> kube-controller-manager [cd6efa75b070] <==
	* I0629 20:24:14.346860       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:24:14.446690       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 20:24:14.447483       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:24:14.447642       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 20:24:14.447679       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:24:14.527661       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:24:14.527734       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 20:24:14.527772       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:24:14.527679       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 20:24:14.620096       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 20:24:14.620123       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:24:14.620157       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 20:24:14.620741       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:24:14.632570       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:24:14.632755       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 20:24:14.632793       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:24:14.632859       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:24:14.747545       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-shsp8"
	I0629 20:24:14.820392       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-jlpln"
	E0629 20:24:33.224171       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 20:24:33.620608       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0629 20:25:03.329669       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 20:25:03.734116       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0629 20:25:33.421632       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 20:25:33.768015       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
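	The resource_quota_controller and garbagecollector errors above recur at a fixed cadence; the glog timestamps confirm it. A small sketch (timestamps copied from the lines above; glog omits the year, so 2022 is assumed):

	```python
	from datetime import datetime

	# glog timestamps of the three resource_quota_controller errors above
	# (format: mmdd hh:mm:ss.uuuuuu, year not recorded).
	STAMPS = ["0629 20:24:33.224171", "0629 20:25:03.329669", "0629 20:25:33.421632"]

	def parse_glog(stamp: str) -> datetime:
	    """Parse a glog 'mmdd hh:mm:ss.uuuuuu' timestamp, assuming year 2022."""
	    return datetime.strptime("2022" + stamp, "%Y%m%d %H:%M:%S.%f")

	times = [parse_glog(s) for s in STAMPS]
	gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
	print(gaps)  # roughly 30s apart: the controller's periodic discovery resync
	```

	The ~30s spacing indicates a periodic resync failing each cycle (metrics.k8s.io unavailable), not a crash loop.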
	
	* 
	* ==> kube-proxy [760bd2bff2d8] <==
	* I0629 20:24:08.725660       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0629 20:24:08.731835       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0629 20:24:08.816891       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0629 20:24:08.821289       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0629 20:24:08.825926       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0629 20:24:09.033346       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I0629 20:24:09.033592       1 server_others.go:138] "Detected node IP" address="192.168.85.2"
	I0629 20:24:09.033875       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 20:24:09.722619       1 server_others.go:206] "Using iptables Proxier"
	I0629 20:24:09.722760       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 20:24:09.722857       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 20:24:09.722895       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 20:24:09.722946       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 20:24:09.724066       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 20:24:09.724415       1 server.go:661] "Version info" version="v1.24.2"
	I0629 20:24:09.724435       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 20:24:09.725890       1 config.go:317] "Starting service config controller"
	I0629 20:24:09.726485       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 20:24:09.726611       1 config.go:226] "Starting endpoint slice config controller"
	I0629 20:24:09.726624       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 20:24:09.728092       1 config.go:444] "Starting node config controller"
	I0629 20:24:09.728113       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 20:24:09.827557       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0629 20:24:09.827804       1 shared_informer.go:262] Caches are synced for service config
	I0629 20:24:09.828232       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e33ea872250b] <==
	* W0629 20:23:45.484980       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0629 20:23:45.485150       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0629 20:23:45.530631       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0629 20:23:45.530683       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0629 20:23:45.569203       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0629 20:23:45.569372       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0629 20:23:45.619835       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0629 20:23:45.620004       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0629 20:23:45.628730       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0629 20:23:45.628869       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0629 20:23:45.678153       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0629 20:23:45.678324       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0629 20:23:45.693526       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 20:23:45.693673       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0629 20:23:45.787090       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0629 20:23:45.787241       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0629 20:23:45.831772       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0629 20:23:45.831824       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0629 20:23:45.847710       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0629 20:23:45.847847       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0629 20:23:45.919627       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0629 20:23:45.919709       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0629 20:23:46.023522       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 20:23:46.023575       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0629 20:23:48.338649       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 20:18:14 UTC, end at Wed 2022-06-29 20:26:08 UTC. --
	Jun 29 20:24:44 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:24:44.217009    7688 scope.go:110] "RemoveContainer" containerID="fd002faae47731d6783778db88d4800d8b27191f70b22d7426dae7f5bfbbc987"
	Jun 29 20:24:44 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:24:44.217757    7688 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-shsp8_kubernetes-dashboard(73de8c45-d927-4bfa-9fb3-147155f2bf35)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-shsp8" podUID=73de8c45-d927-4bfa-9fb3-147155f2bf35
	Jun 29 20:24:45 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:24:45.583740    7688 scope.go:110] "RemoveContainer" containerID="fd002faae47731d6783778db88d4800d8b27191f70b22d7426dae7f5bfbbc987"
	Jun 29 20:24:45 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:24:45.584559    7688 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-shsp8_kubernetes-dashboard(73de8c45-d927-4bfa-9fb3-147155f2bf35)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-shsp8" podUID=73de8c45-d927-4bfa-9fb3-147155f2bf35
	Jun 29 20:24:57 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:24:57.041849    7688 scope.go:110] "RemoveContainer" containerID="fd002faae47731d6783778db88d4800d8b27191f70b22d7426dae7f5bfbbc987"
	Jun 29 20:25:07 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:07.231876    7688 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-default-k8s-different-port-20220629201430-2408.16fd312b55520178", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-default-k8s-different-port-20220629201430-2408", UID:"c3fb55440ccabb3f3dc30995c3d6f8e8", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"default-k8s-different-port-20220629201430-2408"}, FirstTimestamp:time.Date(2022, time.June, 29, 20, 25, 0, 127003000, time.Local), LastTimestamp:time.Date(2022, time.June, 29, 20, 25, 0, 127003000, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:25:11.756489    7688 scope.go:110] "RemoveContainer" containerID="54fb2a5a3d3a9d7a89e27898edd5340e17b88ffd61cc1e5f943a93106b653f61"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:25:11.839267    7688 scope.go:110] "RemoveContainer" containerID="fd002faae47731d6783778db88d4800d8b27191f70b22d7426dae7f5bfbbc987"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:25:11.840084    7688 scope.go:110] "RemoveContainer" containerID="8f46fd284463f57adefa31c3384d40f129e5a729ad3e66ecc81e3bcdae8db49b"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:11.840788    7688 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-shsp8_kubernetes-dashboard(73de8c45-d927-4bfa-9fb3-147155f2bf35)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-shsp8" podUID=73de8c45-d927-4bfa-9fb3-147155f2bf35
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:11.923852    7688 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:11.924012    7688 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:11.924324    7688 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vrqrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-9llbc_kube-system(a3bd48e4-1f00-4d3d-97ee-71c3d5984f20): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:11.924612    7688 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-9llbc" podUID=a3bd48e4-1f00-4d3d-97ee-71c3d5984f20
	Jun 29 20:25:15 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:25:15.580669    7688 scope.go:110] "RemoveContainer" containerID="8f46fd284463f57adefa31c3384d40f129e5a729ad3e66ecc81e3bcdae8db49b"
	Jun 29 20:25:15 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:15.581248    7688 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-shsp8_kubernetes-dashboard(73de8c45-d927-4bfa-9fb3-147155f2bf35)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-shsp8" podUID=73de8c45-d927-4bfa-9fb3-147155f2bf35
	Jun 29 20:25:27 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:27.042636    7688 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-5c6f97fb75-9llbc" podUID=a3bd48e4-1f00-4d3d-97ee-71c3d5984f20
	Jun 29 20:25:30 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:25:30.037390    7688 scope.go:110] "RemoveContainer" containerID="8f46fd284463f57adefa31c3384d40f129e5a729ad3e66ecc81e3bcdae8db49b"
	Jun 29 20:25:30 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:30.038004    7688 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-shsp8_kubernetes-dashboard(73de8c45-d927-4bfa-9fb3-147155f2bf35)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-shsp8" podUID=73de8c45-d927-4bfa-9fb3-147155f2bf35
	Jun 29 20:25:34 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:34.087480    7688 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"metrics-server-5c6f97fb75-9llbc.16fd3121f487c0e8", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"520", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"metrics-server-5c6f97fb75-9llbc", UID:"a3bd48e4-1f00-4d3d-97ee-71c3d5984f20", APIVersion:"v1", ResourceVersion:"403", FieldPath:"spec.containers{metrics-server}"}, Reason:"BackOff", Message:"Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\"", Source:v1.EventSource{Component:"kubelet", Host:"default-k8s-different-port-20220629201430-2408"}, FirstTimestamp:time.Date(2022, time.June, 29, 20, 24, 19, 0, time.Local), LastTimestamp:time.Date(2022, time.June, 29, 20, 25, 27, 42500600, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
	Jun 29 20:25:36 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:36.900721    7688 controller.go:187] failed to update lease, error: etcdserver: request timed out
	Jun 29 20:25:38 default-k8s-different-port-20220629201430-2408 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jun 29 20:25:38 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:25:38.646230    7688 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jun 29 20:25:38 default-k8s-different-port-20220629201430-2408 systemd[1]: kubelet.service: Succeeded.
	Jun 29 20:25:38 default-k8s-different-port-20220629201430-2408 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [28e64e74c203] <==
	* 2022/06/29 20:25:12 Starting overwatch
	2022/06/29 20:25:12 Using namespace: kubernetes-dashboard
	2022/06/29 20:25:12 Using in-cluster config to connect to apiserver
	2022/06/29 20:25:12 Using secret token for csrf signing
	2022/06/29 20:25:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/29 20:25:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/29 20:25:13 Successful initial request to the apiserver, version: v1.24.2
	2022/06/29 20:25:13 Generating JWE encryption key
	2022/06/29 20:25:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/29 20:25:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/29 20:25:13 Initializing JWE encryption key from synchronized object
	2022/06/29 20:25:13 Creating in-cluster Sidecar client
	2022/06/29 20:25:13 Serving insecurely on HTTP port: 9090
	2022/06/29 20:25:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [10a2f9728da8] <==
	* I0629 20:25:12.759949       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 20:25:12.852380       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 20:25:12.852554       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [54fb2a5a3d3a] <==
	* k8s.io/client-go/util/workqueue.(*Type).Get(0xc0006159e0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc00003ca00, 0x18e5530, 0xc000592080, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0000c6180)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0000c6180, 0x18b3d60, 0xc0000be270, 0x1, 0xc00058c120)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0000c6180, 0x3b9aca00, 0x0, 0x1, 0xc00058c120)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0000c6180, 0x3b9aca00, 0xc00058c120)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	goroutine 181 [runnable]:
	k8s.io/client-go/tools/record.(*recorderImpl).generateEvent.func1(0xc00045b840, 0xc00003c280)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341
	created by k8s.io/client-go/tools/record.(*recorderImpl).generateEvent
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341 +0x3b7
	I0629 20:25:09.825337       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6bfa885-666d-4924-bcca-a35b45ce9842", APIVersion:"v1", ResourceVersion:"568", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220629201430-2408_f619cd47-f3e6-4b82-a949-36583e88e085 stopped leading
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0629 20:26:07.777744    4136 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220629201430-2408 -n default-k8s-different-port-20220629201430-2408
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220629201430-2408 -n default-k8s-different-port-20220629201430-2408: exit status 2 (7.4200054s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "default-k8s-different-port-20220629201430-2408" apiserver is not running, skipping kubectl commands (state="Paused")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220629201430-2408
helpers_test.go:231: (dbg) Done: docker inspect default-k8s-different-port-20220629201430-2408: (1.157118s)
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220629201430-2408:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "07c14fc792435494dd480bd6997cd4a737fa1221606fb53328a49dc4a2af2b35",
	        "Created": "2022-06-29T20:15:30.1215786Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T20:18:14.1128585Z",
	            "FinishedAt": "2022-06-29T20:17:51.0691308Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/07c14fc792435494dd480bd6997cd4a737fa1221606fb53328a49dc4a2af2b35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/07c14fc792435494dd480bd6997cd4a737fa1221606fb53328a49dc4a2af2b35/hostname",
	        "HostsPath": "/var/lib/docker/containers/07c14fc792435494dd480bd6997cd4a737fa1221606fb53328a49dc4a2af2b35/hosts",
	        "LogPath": "/var/lib/docker/containers/07c14fc792435494dd480bd6997cd4a737fa1221606fb53328a49dc4a2af2b35/07c14fc792435494dd480bd6997cd4a737fa1221606fb53328a49dc4a2af2b35-json.log",
	        "Name": "/default-k8s-different-port-20220629201430-2408",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220629201430-2408:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220629201430-2408",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ac2a94c8108e5f894523f0890eda6b4237e7afb189b0a40ba4aefc427a709727-init/diff:/var/lib/docker/overlay2/18fa2cfa420a1e80c1aefe4442db7e09e685eb6b69d30a3f812abf0fcd5b1ce8/diff:/var/lib/docker/overlay2/aba3e90b0a8f2d7eefad9e62fda91f6713fdc48732352c033f55a5f2fb9d5051/diff:/var/lib/docker/overlay2/6e33975e7a2b5eb470d2cc19f49dfd8506b5158029ca29518653c9de49149fa9/diff:/var/lib/docker/overlay2/0037ae946c15e22839a7ac209758f6fe7b71b326b867a9ce04ec676f5b8c06a6/diff:/var/lib/docker/overlay2/9c42624bebccf9152803eaad763e31ce035bdcdde0f54bfd5c88b9bb436d8327/diff:/var/lib/docker/overlay2/84521428dc63a36c9c8e902e4a72309e30edf7ca74fce9fc847a1f0322dbc53f/diff:/var/lib/docker/overlay2/7d7f88709e16b5aae440b1e298e370c888250af23e45a901effd41cf24361c60/diff:/var/lib/docker/overlay2/ec651b0921231e96280abd101a8af5a63c74f75e5393917c7c51a4779e8c18ee/diff:/var/lib/docker/overlay2/de54eba4af17491eb746d5d519e3e9d2209bb7e77a7e5e97a3fed0e5222cc91b/diff:/var/lib/docker/overlay2/41f2a6c56ef2a3c6f7de181184e718ecb06cff24d2f3067f95f7609c8428890c/diff:/var/lib/docker/overlay2/dadd972d4b0ae7e16296c1fd2116b2362dcd68c94ca80683b16746f9f9af4c04/diff:/var/lib/docker/overlay2/d07ed1db13541e2b4edaad932df907a36057e8115f039c12379e1f4bd9358fcc/diff:/var/lib/docker/overlay2/2ea8ed9010b183040dd8663549244a49bf69bebdb52d48dcdfab8bb80ae569e9/diff:/var/lib/docker/overlay2/2b3aef18028ba313056c34b21dd2fe925b2a075b71ce79d4a700666a4a1294f3/diff:/var/lib/docker/overlay2/4f2c4fada74eb6f2253a2e6e3e69366c21a0e146314e507111b068a94431e118/diff:/var/lib/docker/overlay2/eebb16c3252fcc56a8f29f6f4cc140749f09d91c7618992ef26ccd17bc7326a8/diff:/var/lib/docker/overlay2/4c9fb9630f6a81f45d6683e4b35bd45b802de81702df3682376cd5eded2c6293/diff:/var/lib/docker/overlay2/7966fc785bbb93b70572461c2b75d02d408e500cbfbe9fb28a85610069e53048/diff:/var/lib/docker/overlay2/7ccd830d8272e56eb8af3cb67fd85111a15d0bd24740b16d9820d03e8b5e613b/diff:/var/lib/docker/overlay2/98991bbd08d46d706f89f20373025f42bb1eca28599ab9d368327d28d37da3e5/diff:/var/lib/docker/overlay2/acd0db325a9cc956c00473cca2cbe9e8938e54e309f7812ce96651505f2c026c/diff:/var/lib/docker/overlay2/a952097255f1545a148e11dc183ed9d457d086b3b19e6cf5c0a84d334a7868fa/diff:/var/lib/docker/overlay2/3626d83dcea23ee4d5fc8d381d865b85ce85b9ec935c11ea07472acad97752b1/diff:/var/lib/docker/overlay2/daee0769e25b6c6df3644bb280d7cdd0552baeddf00f478d726753a2f02990c4/diff:/var/lib/docker/overlay2/5884da0a2a1c8a365253922fefbfd2861b15e6174a57089f167640bf48fab86d/diff:/var/lib/docker/overlay2/aee5fb879dea6a5dd5d33604a38bc85cef9e6fac8b91d40eae0556920c1f013e/diff:/var/lib/docker/overlay2/329fa8dc36977e4ddcb0c9d5de68a736a48555759a67cc6b901c51a7d20bc940/diff:/var/lib/docker/overlay2/a83adaf465d339d1dbbb19a8e721a3af2dde845d387565c2c23aa55a2a9b3050/diff:/var/lib/docker/overlay2/1eb7c3b1832b132fab8951130f1f3e7525eb849dfd649d730922f192509da8c7/diff:/var/lib/docker/overlay2/63900a7721a42c32f20ce20e83aa0648dc9f1f96e2c44c60b949cdf2ed635b89/diff:/var/lib/docker/overlay2/b1f2b06276b16264d5e1dd74a450a8433b30f118bdbda62a0be9806dc63962c9/diff:/var/lib/docker/overlay2/1d38b89baa7faea58d17550171f82f91e22823fa4687739f8e96012ba2d6b8bc/diff:/var/lib/docker/overlay2/6a845f21cdc42782d41ea29b6b6d28d87b17e628822711a35d5986ee5327afe9/diff:/var/lib/docker/overlay2/e64e9638ae14983a1fadb7196cfb18b26908f27c9c025d014d1b3e014fe592f2/diff:/var/lib/docker/overlay2/0e4706537848c7cd84366bdfaa32b5a3c84c900772b5ad83d1dfd507ddbfe686/diff:/var/lib/docker/overlay2/2bfed1b7470b0df7e7cb5905c1d1671735c6755b541e4f951e007994f0a090d8/diff:/var/lib/docker/overlay2/a337d8b9854e844eac3af70f23116c0353d23cc66cd2075e17b2f5c4daeb3a54/diff:/var/lib/docker/overlay2/ad5d4a9033e102f17440a355bb241c3e635b435132d5d3b83f45c913c2b142b9/diff:/var/lib/docker/overlay2/533adabc16e60531543e7123b3eb7c5db5a5d5c8b3ff5c5a58a357f9ce9b92a4/diff:/var/lib/docker/overlay2/004d9874f6692e521883f825fdecfdbc36c21b5776c23879841b4718d1b9f2ab/diff:/var/lib/docker/overlay2/aada4d8989429d7ecea61233a031b78c69ebd53862529973a4c5a3f581e5b2dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac2a94c8108e5f894523f0890eda6b4237e7afb189b0a40ba4aefc427a709727/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac2a94c8108e5f894523f0890eda6b4237e7afb189b0a40ba4aefc427a709727/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac2a94c8108e5f894523f0890eda6b4237e7afb189b0a40ba4aefc427a709727/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220629201430-2408",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220629201430-2408/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220629201430-2408",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220629201430-2408",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220629201430-2408",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "47364ffaf80da4e8be1fb2d0a2d4d4d433d03e56b665a75a1638e08a29c503ef",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57167"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57168"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57169"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57170"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57171"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/47364ffaf80d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220629201430-2408": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "07c14fc79243",
	                        "default-k8s-different-port-20220629201430-2408"
	                    ],
	                    "NetworkID": "d88bb354aff4ce91db16a4dcfedbd3ffa4db13d0bb2d32411fdb12923981dd82",
	                    "EndpointID": "71cfec67be98c9ad7e298190be5f0257a3b42161420b711894064d4b376f4b99",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220629201430-2408 -n default-k8s-different-port-20220629201430-2408
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220629201430-2408 -n default-k8s-different-port-20220629201430-2408: exit status 2 (7.2457033s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-different-port-20220629201430-2408 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p default-k8s-different-port-20220629201430-2408 logs -n 25: (18.2014022s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                            | Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	| start   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:16 GMT | 29 Jun 22 20:23 GMT |
	|         | embed-certs-20220629201242-2408                            |          |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |                   |         |                     |                     |
	|         | --wait=true --embed-certs                                  |          |                   |         |                     |                     |
	|         | --driver=docker                                            |          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |                   |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:17 GMT | 29 Jun 22 20:17 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |                   |         |                     |                     |
	| stop    | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:17 GMT | 29 Jun 22 20:17 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |                   |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:17 GMT | 29 Jun 22 20:18 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |                   |         |                     |                     |
	| start   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:18 GMT | 29 Jun 22 20:24 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |                   |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |                   |         |                     |                     |
	| ssh     | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:23 GMT | 29 Jun 22 20:23 GMT |
	|         | embed-certs-20220629201242-2408                            |          |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |                   |         |                     |                     |
	| pause   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:23 GMT | 29 Jun 22 20:23 GMT |
	|         | embed-certs-20220629201242-2408                            |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| ssh     | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:24 GMT |
	|         | old-k8s-version-20220629201126-2408                        |          |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |                   |         |                     |                     |
	| pause   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:24 GMT |
	|         | old-k8s-version-20220629201126-2408                        |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| unpause | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:24 GMT |
	|         | embed-certs-20220629201242-2408                            |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| ssh     | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:24 GMT |
	|         | no-preload-20220629201225-2408                             |          |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |                   |         |                     |                     |
	| pause   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:24 GMT |
	|         | no-preload-20220629201225-2408                             |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| unpause | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:24 GMT |
	|         | old-k8s-version-20220629201126-2408                        |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:25 GMT |
	|         | embed-certs-20220629201242-2408                            |          |                   |         |                     |                     |
	| unpause | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:24 GMT | 29 Jun 22 20:25 GMT |
	|         | no-preload-20220629201225-2408                             |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | old-k8s-version-20220629201126-2408                        |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | embed-certs-20220629201242-2408                            |          |                   |         |                     |                     |
	| start   | -p newest-cni-20220629202523-2408 --memory=2200            | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT |                     |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.24.2               |          |                   |         |                     |                     |
	| ssh     | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | no-preload-20220629201225-2408                             |          |                   |         |                     |                     |
	| pause   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT |                     |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | old-k8s-version-20220629201126-2408                        |          |                   |         |                     |                     |
	| start   | -p auto-20220629200908-2408                                | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT |                     |
	|         | --memory=2048                                              |          |                   |         |                     |                     |
	|         | --alsologtostderr                                          |          |                   |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |          |                   |         |                     |                     |
	|         | --driver=docker                                            |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:26 GMT |
	|         | no-preload-20220629201225-2408                             |          |                   |         |                     |                     |
	| start   | -p kindnet-20220629200924-2408                             | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:26 GMT |                     |
	|         | --memory=2048                                              |          |                   |         |                     |                     |
	|         | --alsologtostderr                                          |          |                   |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |          |                   |         |                     |                     |
	|         | --cni=kindnet --driver=docker                              |          |                   |         |                     |                     |
	|---------|------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* W0629 20:26:02.830504   10512 cli_runner.go:211] docker network inspect auto-20220629200908-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0629 20:26:02.830504   10512 cli_runner.go:217] Completed: docker network inspect auto-20220629200908-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1605607s)
	I0629 20:26:02.841115   10512 network_create.go:272] running [docker network inspect auto-20220629200908-2408] to gather additional debugging logs...
	I0629 20:26:02.841115   10512 cli_runner.go:164] Run: docker network inspect auto-20220629200908-2408
	W0629 20:26:03.955301   10512 cli_runner.go:211] docker network inspect auto-20220629200908-2408 returned with exit code 1
	I0629 20:26:03.955301   10512 cli_runner.go:217] Completed: docker network inspect auto-20220629200908-2408: (1.1141789s)
	I0629 20:26:03.955301   10512 network_create.go:275] error running [docker network inspect auto-20220629200908-2408]: docker network inspect auto-20220629200908-2408: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220629200908-2408
	I0629 20:26:03.955301   10512 network_create.go:277] output of [docker network inspect auto-20220629200908-2408]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220629200908-2408
	
	** /stderr **
	I0629 20:26:03.967010   10512 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0629 20:26:05.053037   10512 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0860197s)
	I0629 20:26:05.075032   10512 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003fc080] misses:0}
	I0629 20:26:05.075032   10512 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:26:05.075032   10512 network_create.go:115] attempt to create docker network auto-20220629200908-2408 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0629 20:26:05.082029   10512 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220629200908-2408 auto-20220629200908-2408
	W0629 20:26:06.180658   10512 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220629200908-2408 auto-20220629200908-2408 returned with exit code 1
	I0629 20:26:06.180658   10512 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220629200908-2408 auto-20220629200908-2408: (1.0976228s)
	W0629 20:26:06.180658   10512 network_create.go:107] failed to create docker network auto-20220629200908-2408 192.168.49.0/24, will retry: subnet is taken
	I0629 20:26:06.209387   10512 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003fc080] amended:false}} dirty:map[] misses:0}
	I0629 20:26:06.209387   10512 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:26:06.231377   10512 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003fc080] amended:true}} dirty:map[192.168.49.0:0xc0003fc080 192.168.58.0:0xc00041e280] misses:0}
	I0629 20:26:06.231377   10512 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:26:06.231377   10512 network_create.go:115] attempt to create docker network auto-20220629200908-2408 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0629 20:26:06.239374   10512 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220629200908-2408 auto-20220629200908-2408
	W0629 20:26:07.364084   10512 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220629200908-2408 auto-20220629200908-2408 returned with exit code 1
	I0629 20:26:07.364084   10512 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220629200908-2408 auto-20220629200908-2408: (1.1247029s)
	W0629 20:26:07.364084   10512 network_create.go:107] failed to create docker network auto-20220629200908-2408 192.168.58.0/24, will retry: subnet is taken
	I0629 20:26:07.383084   10512 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003fc080] amended:true}} dirty:map[192.168.49.0:0xc0003fc080 192.168.58.0:0xc00041e280] misses:1}
	I0629 20:26:07.383084   10512 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:26:07.402086   10512 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003fc080] amended:true}} dirty:map[192.168.49.0:0xc0003fc080 192.168.58.0:0xc00041e280 192.168.67.0:0xc000a18950] misses:1}
	I0629 20:26:07.402086   10512 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:26:07.402086   10512 network_create.go:115] attempt to create docker network auto-20220629200908-2408 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0629 20:26:07.412079   10512 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220629200908-2408 auto-20220629200908-2408
	Log file created at: 2022/06/29 20:26:07
	Running on machine: minikube8
	Binary: Built with gc go1.18.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 20:26:07.633935   10596 out.go:296] Setting OutFile to fd 1960 ...
	I0629 20:26:07.697939   10596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:26:07.697939   10596 out.go:309] Setting ErrFile to fd 1872...
	I0629 20:26:07.697939   10596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:26:07.719138   10596 out.go:303] Setting JSON to false
	I0629 20:26:07.721818   10596 start.go:115] hostinfo: {"hostname":"minikube8","uptime":26930,"bootTime":1656507437,"procs":161,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0629 20:26:07.722373   10596 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 20:26:07.728430   10596 out.go:177] * [kindnet-20220629200924-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0629 20:26:07.731614   10596 notify.go:193] Checking for updates...
	I0629 20:26:07.733676   10596 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 20:26:07.736677   10596 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0629 20:26:07.738686   10596 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 20:26:07.741679   10596 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 20:26:08.644812   10512 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-20220629200908-2408 auto-20220629200908-2408: (1.2327245s)
	I0629 20:26:08.644901   10512 network_create.go:99] docker network auto-20220629200908-2408 192.168.67.0/24 created
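	The retry sequence above (reserve 192.168.49.0/24, hit "subnet is taken", step to 192.168.58.0/24, then succeed on 192.168.67.0/24) can be sketched as follows. This is a hedged illustration of the behavior visible in the log, not minikube's actual `network_create.go` implementation; the step size of 9 in the third octet and the `is_taken` callback are inferred from the tried subnets, and the names are hypothetical.

```python
# Sketch (assumption, not minikube source): probe private /24 subnets the way
# the log shows them being tried -- 192.168.49.0/24, then 58, then 67 -- until
# the stand-in for `docker network create` stops reporting "subnet is taken".
import ipaddress


def candidate_subnets(start_octet=49, step=9, limit=3):
    """Yield /24 candidates in the order the log shows them being attempted."""
    octet = start_octet
    for _ in range(limit):
        yield ipaddress.ip_network(f"192.168.{octet}.0/24")
        octet += step


def pick_free_subnet(is_taken, **kwargs):
    """Return the first candidate subnet that is_taken() does not reject.

    is_taken is a callable standing in for a failed `docker network create`.
    """
    for net in candidate_subnets(**kwargs):
        if not is_taken(net):
            return net
    return None


# In the log, 192.168.49.0/24 and 192.168.58.0/24 were taken; 67 succeeded.
taken = {
    ipaddress.ip_network("192.168.49.0/24"),
    ipaddress.ip_network("192.168.58.0/24"),
}
print(pick_free_subnet(lambda n: n in taken))  # 192.168.67.0/24
```

	Under these assumptions the third attempt lands on 192.168.67.0/24, matching the `docker network auto-20220629200908-2408 192.168.67.0/24 created` line above.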
	I0629 20:26:08.644901   10512 kic.go:106] calculated static IP "192.168.67.2" for the "auto-20220629200908-2408" container
	I0629 20:26:08.657675   10512 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0629 20:26:09.765266   10512 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1074797s)
	I0629 20:26:09.772892   10512 cli_runner.go:164] Run: docker volume create auto-20220629200908-2408 --label name.minikube.sigs.k8s.io=auto-20220629200908-2408 --label created_by.minikube.sigs.k8s.io=true
	I0629 20:26:10.910029   10512 cli_runner.go:217] Completed: docker volume create auto-20220629200908-2408 --label name.minikube.sigs.k8s.io=auto-20220629200908-2408 --label created_by.minikube.sigs.k8s.io=true: (1.1371294s)
	I0629 20:26:10.910029   10512 oci.go:103] Successfully created a docker volume auto-20220629200908-2408
	I0629 20:26:10.917037   10512 cli_runner.go:164] Run: docker run --rm --name auto-20220629200908-2408-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220629200908-2408 --entrypoint /usr/bin/test -v auto-20220629200908-2408:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib
	I0629 20:26:07.746686   10596 config.go:178] Loaded profile config "auto-20220629200908-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:26:07.747415   10596 config.go:178] Loaded profile config "default-k8s-different-port-20220629201430-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:26:07.747699   10596 config.go:178] Loaded profile config "newest-cni-20220629202523-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:26:07.747699   10596 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 20:26:11.039838   10596 docker.go:137] docker version: linux-20.10.16
	I0629 20:26:11.045853   10596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:26:13.269050   10596 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2231827s)
	I0629 20:26:13.730563   10596 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:68 OomKillDisable:true NGoroutines:67 SystemTime:2022-06-29 20:26:12.1532336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:26:14.668637   10596 out.go:177] * Using the docker driver based on user configuration
	I0629 20:26:14.842554   10596 start.go:284] selected driver: docker
	I0629 20:26:14.842666   10596 start.go:808] validating driver "docker" against <nil>
	I0629 20:26:14.842775   10596 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 20:26:14.912562   10596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:26:17.028120   10596 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1155444s)
	I0629 20:26:17.028120   10596 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:69 OomKillDisable:true NGoroutines:69 SystemTime:2022-06-29 20:26:15.9910923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:26:17.028120   10596 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0629 20:26:17.029116   10596 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 20:26:17.177423   10596 out.go:177] * Using Docker Desktop driver with root privileges
	I0629 20:26:17.181833   10596 cni.go:95] Creating CNI manager for "kindnet"
	I0629 20:26:17.181957   10596 start_flags.go:305] Found "CNI" CNI - setting NetworkPlugin=cni
	I0629 20:26:17.182037   10596 start_flags.go:310] config:
	{Name:kindnet-20220629200924-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kindnet-20220629200924-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 20:26:17.185134   10596 out.go:177] * Starting control plane node kindnet-20220629200924-2408 in cluster kindnet-20220629200924-2408
	I0629 20:26:17.189379   10596 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 20:26:17.191519   10596 out.go:177] * Pulling base image ...
	I0629 20:26:17.217679   10596 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 20:26:17.217679   10596 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 20:26:17.218179   10596 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 20:26:17.218179   10596 cache.go:57] Caching tarball of preloaded images
	I0629 20:26:17.218179   10596 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 20:26:17.218808   10596 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 20:26:17.218876   10596 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\config.json ...
	I0629 20:26:17.218876   10596 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\config.json: {Name:mk59982f11dbc1bfba6fd940f48236ef21d6fac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:26:18.357269   10596 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 20:26:18.485555   10596 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 20:26:18.486460   10596 cache.go:208] Successfully downloaded all kic artifacts
	I0629 20:26:18.486569   10596 start.go:352] acquiring machines lock for kindnet-20220629200924-2408: {Name:mk298f322a972f8cde5f53af595dfcb452238005 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 20:26:18.486824   10596 start.go:356] acquired machines lock for "kindnet-20220629200924-2408" in 188.6µs
	I0629 20:26:18.487059   10596 start.go:91] Provisioning new machine with config: &{Name:kindnet-20220629200924-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kindnet-20220629200924-2408 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 20:26:18.487143   10596 start.go:131] createHost starting for "" (driver="docker")
	I0629 20:26:18.875527   10596 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0629 20:26:18.876606   10596 start.go:165] libmachine.API.Create for "kindnet-20220629200924-2408" (driver="docker")
	I0629 20:26:18.876606   10596 client.go:168] LocalClient.Create starting
	I0629 20:26:18.877501   10596 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I0629 20:26:18.877721   10596 main.go:134] libmachine: Decoding PEM data...
	I0629 20:26:18.877721   10596 main.go:134] libmachine: Parsing certificate...
	I0629 20:26:18.877898   10596 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I0629 20:26:18.878048   10596 main.go:134] libmachine: Decoding PEM data...
	I0629 20:26:18.878048   10596 main.go:134] libmachine: Parsing certificate...
	I0629 20:26:18.894508   10596 cli_runner.go:164] Run: docker network inspect kindnet-20220629200924-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0629 20:26:20.058968   10596 cli_runner.go:211] docker network inspect kindnet-20220629200924-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0629 20:26:20.058968   10596 cli_runner.go:217] Completed: docker network inspect kindnet-20220629200924-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1644532s)
	I0629 20:26:20.067712   10596 network_create.go:272] running [docker network inspect kindnet-20220629200924-2408] to gather additional debugging logs...
	I0629 20:26:20.067712   10596 cli_runner.go:164] Run: docker network inspect kindnet-20220629200924-2408
	W0629 20:26:21.219440   10596 cli_runner.go:211] docker network inspect kindnet-20220629200924-2408 returned with exit code 1
	I0629 20:26:21.219440   10596 cli_runner.go:217] Completed: docker network inspect kindnet-20220629200924-2408: (1.1517201s)
	I0629 20:26:21.219440   10596 network_create.go:275] error running [docker network inspect kindnet-20220629200924-2408]: docker network inspect kindnet-20220629200924-2408: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220629200924-2408
	I0629 20:26:21.219440   10596 network_create.go:277] output of [docker network inspect kindnet-20220629200924-2408]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220629200924-2408
	
	** /stderr **
	I0629 20:26:21.228444   10596 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0629 20:26:22.435474   10596 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2068899s)
	I0629 20:26:22.470419   10596 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00065e808] misses:0}
	I0629 20:26:22.470419   10596 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:26:22.470419   10596 network_create.go:115] attempt to create docker network kindnet-20220629200924-2408 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0629 20:26:22.476790   10596 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-20220629200924-2408 kindnet-20220629200924-2408
	I0629 20:26:23.013646    4664 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20220629202523-2408:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir: (24.9907316s)
	I0629 20:26:23.013646    4664 kic.go:188] duration metric: took 24.996733 seconds to extract preloaded images to volume
	I0629 20:26:23.021187    4664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 20:18:14 UTC, end at Wed 2022-06-29 20:26:31 UTC. --
	Jun 29 20:23:20 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:20.256165200Z" level=info msg="ignoring event" container=012b07bc74ef270870c7cf928c1a7bd9a9db9e5f53889605f3b23e74d31ba7e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:21 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:21.034519800Z" level=info msg="ignoring event" container=c1af6fb3418171b9ca12cbd39c72463c2a7c9cf95101d20eb3a5c762fa9e941a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:21 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:21.871447400Z" level=info msg="ignoring event" container=14d35f4f64cf2616e02d880965bf6220aad0301ae7cc431117f7167e5c312bc5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:22 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:22.605555000Z" level=info msg="ignoring event" container=4d29b68ca76d01ad3f5de6e4c4fa9830f5fe15597db3b2d90ec6c4266970ae50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:23 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:23.383457600Z" level=info msg="ignoring event" container=9896eb0b245b0afe73bf633f27e9623c28f8871ca394ab0717fd7850a6f425bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:24 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:24.325003400Z" level=info msg="ignoring event" container=2b66636c4867a8d17f13c52802d45e096dea9e9a1ef7a2de2bc4a378a642aaa3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:24 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:24.934631500Z" level=info msg="ignoring event" container=055be590a520661d1706f02370594078d7469ba48effb7940333ef26979897df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:25 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:25.479887600Z" level=info msg="ignoring event" container=b7cfedd1e9f5cf71ab9045ff0914cf5f39a941751dc380b672870680677912d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:23:25 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:23:25.903665300Z" level=info msg="ignoring event" container=6b5c7e704ec1968dbda9dda0b9f5fcc02f5b862b03ac9fc2dff33140aab03e82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:24:17 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:17.335975300Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 20:24:17 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:17.336387000Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 20:24:17 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:17.630073900Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 20:24:17 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:17.724024700Z" level=info msg="ignoring event" container=0e3fc0243d97bb1c54abb7094f10bcbcc284bc1b23fc37a302234e46896d1dcb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:24:18 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:18.723909600Z" level=info msg="ignoring event" container=38314a917e6d52b148e10e3360a4975b4e151106d7bb98dbbfab75806c6c9786 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:24:19 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:19.917229100Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 20:24:20 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:20.459397000Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 20:24:40 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:40.322398500Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 29 20:24:40 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:40.770039500Z" level=info msg="ignoring event" container=972706b53023c656635c4195422fe912938b8061495ed94d7759b883e7d8b886 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:24:41 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:24:41.797341900Z" level=info msg="ignoring event" container=fd002faae47731d6783778db88d4800d8b27191f70b22d7426dae7f5bfbbc987 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:25:09 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:25:09.864045900Z" level=info msg="ignoring event" container=54fb2a5a3d3a9d7a89e27898edd5340e17b88ffd61cc1e5f943a93106b653f61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:25:11.565629500Z" level=info msg="ignoring event" container=8f46fd284463f57adefa31c3384d40f129e5a729ad3e66ecc81e3bcdae8db49b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:25:11.833427000Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:25:11.834229100Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:25:11.922541900Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 20:25:39 default-k8s-different-port-20220629201430-2408 dockerd[573]: time="2022-06-29T20:25:39.580665300Z" level=error msg="Handler for POST /v1.41/containers/580dee3616f7/pause returned error: Cannot pause container 580dee3616f7de3a4036a12852d21224be899c0836b478626275102a8dc652f4: OCI runtime pause failed: unable to freeze: unknown"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	10a2f9728da80       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         1                   e8d81798dfd19
	28e64e74c203e       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   About a minute ago   Running             kubernetes-dashboard        0                   3ac48d7c98253
	8f46fd284463f       a90209bb39e3d                                                                                    About a minute ago   Exited              dashboard-metrics-scraper   2                   da19ab3513292
	54fb2a5a3d3a9       6e38f40d628db                                                                                    2 minutes ago        Exited              storage-provisioner         0                   e8d81798dfd19
	3ced1f4604b4b       a4ca41631cc7a                                                                                    2 minutes ago        Running             coredns                     0                   5fa0db32f182c
	760bd2bff2d8f       a634548d10b03                                                                                    2 minutes ago        Running             kube-proxy                  0                   867c6c31b956f
	eedd9268a51a7       d3377ffb7177c                                                                                    2 minutes ago        Running             kube-apiserver              0                   3caf37f29e67f
	580dee3616f7d       aebe758cef4cd                                                                                    2 minutes ago        Running             etcd                        0                   9fa8d6123d685
	cd6efa75b070a       34cdf99b1bb3b                                                                                    2 minutes ago        Running             kube-controller-manager     0                   675ceed3bcb2e
	e33ea872250bc       5d725196c1f47                                                                                    2 minutes ago        Running             kube-scheduler              0                   c500fb2ec9ed0
	
	* 
	* ==> coredns [3ced1f4604b4] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [ +20.334456] process 'docker/tmp/qemu-check913677031/check' started with executable stack
	[Jun29 20:01] WSL2: Performing memory compaction.
	[Jun29 20:06] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000015] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000100] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000027] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +21.104341] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000007] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.081691] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000052] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Jun29 20:07] WSL2: Performing memory compaction.
	[Jun29 20:08] WSL2: Performing memory compaction.
	[Jun29 20:09] WSL2: Performing memory compaction.
	[Jun29 20:11] WSL2: Performing memory compaction.
	[Jun29 20:12] WSL2: Performing memory compaction.
	[Jun29 20:14] WSL2: Performing memory compaction.
	[Jun29 20:15] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Jun29 20:16] WSL2: Performing memory compaction.
	[Jun29 20:25] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [580dee3616f7] <==
	* {"level":"warn","ts":"2022-06-29T20:25:41.065Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:28.633Z","time spent":"12.4315669s","remote":"127.0.0.1:58648","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":28,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T20:25:41.065Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"13.4825379s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-06-29T20:25:41.065Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"13.4186559s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1148"}
	{"level":"info","ts":"2022-06-29T20:25:41.065Z","caller":"traceutil/trace.go:171","msg":"trace[861525364] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:613; }","duration":"13.4187037s","start":"2022-06-29T20:25:27.646Z","end":"2022-06-29T20:25:41.065Z","steps":["trace[861525364] 'agreement among raft nodes before linearized reading'  (duration: 13.4186077s)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T20:25:41.065Z","caller":"traceutil/trace.go:171","msg":"trace[703802371] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:613; }","duration":"13.482629s","start":"2022-06-29T20:25:27.582Z","end":"2022-06-29T20:25:41.065Z","steps":["trace[703802371] 'agreement among raft nodes before linearized reading'  (duration: 13.4824942s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:25:41.065Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:27.646Z","time spent":"13.4187602s","remote":"127.0.0.1:58562","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1171,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2022-06-29T20:25:41.065Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"6.2623219s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:25:41.065Z","caller":"traceutil/trace.go:171","msg":"trace[1603285542] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:613; }","duration":"6.2624204s","start":"2022-06-29T20:25:34.803Z","end":"2022-06-29T20:25:41.065Z","steps":["trace[1603285542] 'agreement among raft nodes before linearized reading'  (duration: 6.2623084s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:25:41.065Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:27.582Z","time spent":"13.4826858s","remote":"127.0.0.1:58610","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":0,"response size":28,"request content":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:34.803Z","time spent":"6.2625251s","remote":"127.0.0.1:58596","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":0,"response size":28,"request content":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.338067s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:25:41.066Z","caller":"traceutil/trace.go:171","msg":"trace[793042799] range","detail":"{range_begin:/registry/podsecuritypolicy/; range_end:/registry/podsecuritypolicy0; response_count:0; response_revision:613; }","duration":"2.3381083s","start":"2022-06-29T20:25:38.727Z","end":"2022-06-29T20:25:41.066Z","steps":["trace[793042799] 'agreement among raft nodes before linearized reading'  (duration: 2.3380378s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:38.727Z","time spent":"2.338162s","remote":"127.0.0.1:58626","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":28,"request content":"key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.8356947s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-06-29T20:25:41.065Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"7.8479698s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-06-29T20:25:41.066Z","caller":"traceutil/trace.go:171","msg":"trace[1512434769] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; response_count:0; response_revision:613; }","duration":"1.8357704s","start":"2022-06-29T20:25:39.230Z","end":"2022-06-29T20:25:41.066Z","steps":["trace[1512434769] 'agreement among raft nodes before linearized reading'  (duration: 1.8356359s)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T20:25:41.066Z","caller":"traceutil/trace.go:171","msg":"trace[1122594331] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:0; response_revision:613; }","duration":"7.8482342s","start":"2022-06-29T20:25:33.217Z","end":"2022-06-29T20:25:41.066Z","steps":["trace[1122594331] 'agreement among raft nodes before linearized reading'  (duration: 7.847914s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:39.230Z","time spent":"1.8358376s","remote":"127.0.0.1:58612","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":0,"response size":28,"request content":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:33.217Z","time spent":"7.8482998s","remote":"127.0.0.1:58568","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":42,"response size":30,"request content":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"6.0463182s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"3.1022529s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:25:41.066Z","caller":"traceutil/trace.go:171","msg":"trace[1389062586] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:613; }","duration":"3.1023225s","start":"2022-06-29T20:25:37.963Z","end":"2022-06-29T20:25:41.066Z","steps":["trace[1389062586] 'agreement among raft nodes before linearized reading'  (duration: 3.1022556s)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T20:25:41.066Z","caller":"traceutil/trace.go:171","msg":"trace[1467184206] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:0; response_revision:613; }","duration":"6.0464425s","start":"2022-06-29T20:25:35.019Z","end":"2022-06-29T20:25:41.066Z","steps":["trace[1467184206] 'agreement among raft nodes before linearized reading'  (duration: 6.0462855s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:37.963Z","time spent":"3.1023772s","remote":"127.0.0.1:58546","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":28,"request content":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T20:25:41.066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:25:35.019Z","time spent":"6.0465074s","remote":"127.0.0.1:58646","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":30,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true "}
	
	* 
	* ==> kernel <==
	*  20:26:42 up  2:34,  0 users,  load average: 5.14, 7.14, 6.19
	Linux default-k8s-different-port-20220629201430-2408 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [eedd9268a51a] <==
	* E0629 20:25:38.746196       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0629 20:25:38.746268       1 writers.go:118] apiserver was unable to write a JSON response: http: Handler timeout
	E0629 20:25:38.746427       1 writers.go:118] apiserver was unable to write a JSON response: http: Handler timeout
	{"level":"warn","ts":"2022-06-29T20:25:38.746Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001e8c000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	I0629 20:25:38.746448       1 trace.go:205] Trace[157811910]: "Patch" url:/api/v1/namespaces/kube-system/events/metrics-server-5c6f97fb75-9llbc.16fd3121f4881d7c,user-agent:kubelet/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:a5df021d-c245-415a-b117-7337786914c4,client:192.168.85.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (29-Jun-2022 20:25:34.092) (total time: 4653ms):
	Trace[157811910]: [4.6537841s] [4.6537841s] END
	I0629 20:25:38.746574       1 trace.go:205] Trace[1186453289]: "GuaranteedUpdate etcd3" type:*core.Event (29-Jun-2022 20:25:34.092) (total time: 4653ms):
	Trace[1186453289]: [4.6536581s] [4.6536581s] END
	{"level":"warn","ts":"2022-06-29T20:25:38.746Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000befdc0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0629 20:25:38.746631       1 wrap.go:53] timeout or abort while handling: method=PATCH URI="/api/v1/namespaces/kube-system/events/metrics-server-5c6f97fb75-9llbc.16fd3121f4881d7c" audit-ID="a5df021d-c245-415a-b117-7337786914c4"
	I0629 20:25:38.746636       1 trace.go:205] Trace[1078821673]: "GuaranteedUpdate etcd3" type:*coordination.Lease (29-Jun-2022 20:25:36.906) (total time: 1840ms):
	Trace[1078821673]: [1.840515s] [1.840515s] END
	E0629 20:25:38.746685       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 217µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0629 20:25:38.746821       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 14.3µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0629 20:25:38.748259       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0629 20:25:38.749674       1 timeout.go:141] post-timeout activity - time-elapsed: 3.005ms, PATCH "/api/v1/namespaces/kube-system/events/metrics-server-5c6f97fb75-9llbc.16fd3121f4881d7c" result: <nil>
	E0629 20:25:38.749714       1 writers.go:131] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0629 20:25:38.750993       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	I0629 20:25:38.760064       1 trace.go:205] Trace[233392963]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-default-k8s-different-port-20220629201430-2408,user-agent:kubelet/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:e385e2d6-d19d-4992-b5c7-3f2abddd6c8c,client:192.168.85.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (29-Jun-2022 20:25:33.430) (total time: 5329ms):
	Trace[233392963]: [5.3291501s] [5.3291501s] END
	E0629 20:25:38.760808       1 timeout.go:141] post-timeout activity - time-elapsed: 14.7673ms, GET "/api/v1/namespaces/kube-system/pods/kube-apiserver-default-k8s-different-port-20220629201430-2408" result: <nil>
	E0629 20:25:38.761513       1 writers.go:131] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0629 20:25:38.762896       1 trace.go:205] Trace[573860594]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/default-k8s-different-port-20220629201430-2408,user-agent:kubelet/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:f1240996-6d4e-482a-83f2-6b1fe7d06c19,client:192.168.85.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (29-Jun-2022 20:25:36.905) (total time: 1856ms):
	Trace[573860594]: [1.8569964s] [1.8569964s] END
	E0629 20:25:38.764789       1 timeout.go:141] post-timeout activity - time-elapsed: 18.5399ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/default-k8s-different-port-20220629201430-2408" result: <nil>
	
	* 
	* ==> kube-controller-manager [cd6efa75b070] <==
	* I0629 20:24:14.346860       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:24:14.446690       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 20:24:14.447483       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:24:14.447642       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 20:24:14.447679       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:24:14.527661       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:24:14.527734       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 20:24:14.527772       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:24:14.527679       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 20:24:14.620096       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 20:24:14.620123       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:24:14.620157       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 20:24:14.620741       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:24:14.632570       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:24:14.632755       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 20:24:14.632793       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:24:14.632859       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:24:14.747545       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-shsp8"
	I0629 20:24:14.820392       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-jlpln"
	E0629 20:24:33.224171       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 20:24:33.620608       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0629 20:25:03.329669       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 20:25:03.734116       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0629 20:25:33.421632       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 20:25:33.768015       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [760bd2bff2d8] <==
	* I0629 20:24:08.725660       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0629 20:24:08.731835       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0629 20:24:08.816891       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0629 20:24:08.821289       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0629 20:24:08.825926       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0629 20:24:09.033346       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I0629 20:24:09.033592       1 server_others.go:138] "Detected node IP" address="192.168.85.2"
	I0629 20:24:09.033875       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 20:24:09.722619       1 server_others.go:206] "Using iptables Proxier"
	I0629 20:24:09.722760       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 20:24:09.722857       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 20:24:09.722895       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 20:24:09.722946       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 20:24:09.724066       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 20:24:09.724415       1 server.go:661] "Version info" version="v1.24.2"
	I0629 20:24:09.724435       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 20:24:09.725890       1 config.go:317] "Starting service config controller"
	I0629 20:24:09.726485       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 20:24:09.726611       1 config.go:226] "Starting endpoint slice config controller"
	I0629 20:24:09.726624       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 20:24:09.728092       1 config.go:444] "Starting node config controller"
	I0629 20:24:09.728113       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 20:24:09.827557       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0629 20:24:09.827804       1 shared_informer.go:262] Caches are synced for service config
	I0629 20:24:09.828232       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e33ea872250b] <==
	* W0629 20:23:45.484980       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0629 20:23:45.485150       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0629 20:23:45.530631       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0629 20:23:45.530683       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0629 20:23:45.569203       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0629 20:23:45.569372       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0629 20:23:45.619835       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0629 20:23:45.620004       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0629 20:23:45.628730       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0629 20:23:45.628869       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0629 20:23:45.678153       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0629 20:23:45.678324       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0629 20:23:45.693526       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 20:23:45.693673       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0629 20:23:45.787090       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0629 20:23:45.787241       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0629 20:23:45.831772       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0629 20:23:45.831824       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0629 20:23:45.847710       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0629 20:23:45.847847       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0629 20:23:45.919627       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0629 20:23:45.919709       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0629 20:23:46.023522       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 20:23:46.023575       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0629 20:23:48.338649       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 20:18:14 UTC, end at Wed 2022-06-29 20:26:42 UTC. --
	Jun 29 20:24:44 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:24:44.217009    7688 scope.go:110] "RemoveContainer" containerID="fd002faae47731d6783778db88d4800d8b27191f70b22d7426dae7f5bfbbc987"
	Jun 29 20:24:44 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:24:44.217757    7688 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-shsp8_kubernetes-dashboard(73de8c45-d927-4bfa-9fb3-147155f2bf35)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-shsp8" podUID=73de8c45-d927-4bfa-9fb3-147155f2bf35
	Jun 29 20:24:45 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:24:45.583740    7688 scope.go:110] "RemoveContainer" containerID="fd002faae47731d6783778db88d4800d8b27191f70b22d7426dae7f5bfbbc987"
	Jun 29 20:24:45 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:24:45.584559    7688 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-shsp8_kubernetes-dashboard(73de8c45-d927-4bfa-9fb3-147155f2bf35)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-shsp8" podUID=73de8c45-d927-4bfa-9fb3-147155f2bf35
	Jun 29 20:24:57 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:24:57.041849    7688 scope.go:110] "RemoveContainer" containerID="fd002faae47731d6783778db88d4800d8b27191f70b22d7426dae7f5bfbbc987"
	Jun 29 20:25:07 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:07.231876    7688 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-default-k8s-different-port-20220629201430-2408.16fd312b55520178", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-default-k8s-different-port-20220629201430-2408", UID:"c3fb55440ccabb3f3dc30995c3d6f8e8", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"default-k8s-different-port-20220629201430-2408"}, FirstTimestamp:time.Date(2022, time.June, 29, 20, 25, 0, 127003000, time.Local), LastTimestamp:time.Date(2022, time.June, 29, 20, 25, 0, 127003000, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:25:11.756489    7688 scope.go:110] "RemoveContainer" containerID="54fb2a5a3d3a9d7a89e27898edd5340e17b88ffd61cc1e5f943a93106b653f61"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:25:11.839267    7688 scope.go:110] "RemoveContainer" containerID="fd002faae47731d6783778db88d4800d8b27191f70b22d7426dae7f5bfbbc987"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:25:11.840084    7688 scope.go:110] "RemoveContainer" containerID="8f46fd284463f57adefa31c3384d40f129e5a729ad3e66ecc81e3bcdae8db49b"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:11.840788    7688 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-shsp8_kubernetes-dashboard(73de8c45-d927-4bfa-9fb3-147155f2bf35)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-shsp8" podUID=73de8c45-d927-4bfa-9fb3-147155f2bf35
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:11.923852    7688 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:11.924012    7688 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:11.924324    7688 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vrqrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-9llbc_kube-system(a3bd48e4-1f00-4d3d-97ee-71c3d5984f20): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 29 20:25:11 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:11.924612    7688 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-9llbc" podUID=a3bd48e4-1f00-4d3d-97ee-71c3d5984f20
	Jun 29 20:25:15 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:25:15.580669    7688 scope.go:110] "RemoveContainer" containerID="8f46fd284463f57adefa31c3384d40f129e5a729ad3e66ecc81e3bcdae8db49b"
	Jun 29 20:25:15 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:15.581248    7688 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-shsp8_kubernetes-dashboard(73de8c45-d927-4bfa-9fb3-147155f2bf35)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-shsp8" podUID=73de8c45-d927-4bfa-9fb3-147155f2bf35
	Jun 29 20:25:27 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:27.042636    7688 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-5c6f97fb75-9llbc" podUID=a3bd48e4-1f00-4d3d-97ee-71c3d5984f20
	Jun 29 20:25:30 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:25:30.037390    7688 scope.go:110] "RemoveContainer" containerID="8f46fd284463f57adefa31c3384d40f129e5a729ad3e66ecc81e3bcdae8db49b"
	Jun 29 20:25:30 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:30.038004    7688 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-shsp8_kubernetes-dashboard(73de8c45-d927-4bfa-9fb3-147155f2bf35)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-shsp8" podUID=73de8c45-d927-4bfa-9fb3-147155f2bf35
	Jun 29 20:25:34 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:34.087480    7688 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"metrics-server-5c6f97fb75-9llbc.16fd3121f487c0e8", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"520", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"metrics-server-5c6f97fb75-9llbc", UID:"a3bd48e4-1f00-4d3d-97ee-71c3d5984f20", APIVersion:"v1", ResourceVersion:"403", FieldPath:"spec.containers{metrics-server}"}, Reason:"BackOff", Message:"Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\"", Source:v1.EventSource{Component:"kubelet", Host:"default-k8s-different-port-20220629201430-2408"}, FirstTimestamp:time.Date(2022, time.June, 29, 20, 24, 19, 0, time.Local), LastTimestamp:time.Date(2022, time.June, 29, 20, 25, 27, 42500600, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
	Jun 29 20:25:36 default-k8s-different-port-20220629201430-2408 kubelet[7688]: E0629 20:25:36.900721    7688 controller.go:187] failed to update lease, error: etcdserver: request timed out
	Jun 29 20:25:38 default-k8s-different-port-20220629201430-2408 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jun 29 20:25:38 default-k8s-different-port-20220629201430-2408 kubelet[7688]: I0629 20:25:38.646230    7688 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jun 29 20:25:38 default-k8s-different-port-20220629201430-2408 systemd[1]: kubelet.service: Succeeded.
	Jun 29 20:25:38 default-k8s-different-port-20220629201430-2408 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [28e64e74c203] <==
	* 2022/06/29 20:25:12 Starting overwatch
	2022/06/29 20:25:12 Using namespace: kubernetes-dashboard
	2022/06/29 20:25:12 Using in-cluster config to connect to apiserver
	2022/06/29 20:25:12 Using secret token for csrf signing
	2022/06/29 20:25:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/29 20:25:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/29 20:25:13 Successful initial request to the apiserver, version: v1.24.2
	2022/06/29 20:25:13 Generating JWE encryption key
	2022/06/29 20:25:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/29 20:25:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/29 20:25:13 Initializing JWE encryption key from synchronized object
	2022/06/29 20:25:13 Creating in-cluster Sidecar client
	2022/06/29 20:25:13 Serving insecurely on HTTP port: 9090
	2022/06/29 20:25:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [10a2f9728da8] <==
	* I0629 20:25:12.759949       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 20:25:12.852380       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 20:25:12.852554       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [54fb2a5a3d3a] <==
	* k8s.io/client-go/util/workqueue.(*Type).Get(0xc0006159e0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc00003ca00, 0x18e5530, 0xc000592080, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0000c6180)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0000c6180, 0x18b3d60, 0xc0000be270, 0x1, 0xc00058c120)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0000c6180, 0x3b9aca00, 0x0, 0x1, 0xc00058c120)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0000c6180, 0x3b9aca00, 0xc00058c120)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	goroutine 181 [runnable]:
	k8s.io/client-go/tools/record.(*recorderImpl).generateEvent.func1(0xc00045b840, 0xc00003c280)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341
	created by k8s.io/client-go/tools/record.(*recorderImpl).generateEvent
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/tools/record/event.go:341 +0x3b7
	I0629 20:25:09.825337       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6bfa885-666d-4924-bcca-a35b45ce9842", APIVersion:"v1", ResourceVersion:"568", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220629201430-2408_f619cd47-f3e6-4b82-a949-36583e88e085 stopped leading
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0629 20:26:41.887753    6988 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220629201430-2408 -n default-k8s-different-port-20220629201430-2408
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220629201430-2408 -n default-k8s-different-port-20220629201430-2408: exit status 2 (7.4607445s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "default-k8s-different-port-20220629201430-2408" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (78.50s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (629.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20220629200933-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-20220629200933-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 80 (10m29.1352486s)

                                                
                                                
-- stdout --
	* [cilium-20220629200933-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node cilium-20220629200933-2408 in cluster cilium-20220629200933-2408
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Cilium (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0629 20:27:28.780436    3204 out.go:296] Setting OutFile to fd 1684 ...
	I0629 20:27:28.847786    3204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:27:28.847786    3204 out.go:309] Setting ErrFile to fd 1984...
	I0629 20:27:28.847786    3204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:27:28.870790    3204 out.go:303] Setting JSON to false
	I0629 20:27:28.875782    3204 start.go:115] hostinfo: {"hostname":"minikube8","uptime":27011,"bootTime":1656507437,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0629 20:27:28.875782    3204 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 20:27:28.880789    3204 out.go:177] * [cilium-20220629200933-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0629 20:27:28.885781    3204 notify.go:193] Checking for updates...
	I0629 20:27:28.887790    3204 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 20:27:28.890961    3204 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0629 20:27:28.890961    3204 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 20:27:28.895790    3204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 20:27:28.898790    3204 config.go:178] Loaded profile config "auto-20220629200908-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:27:28.899792    3204 config.go:178] Loaded profile config "kindnet-20220629200924-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:27:28.899792    3204 config.go:178] Loaded profile config "newest-cni-20220629202523-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:27:28.899792    3204 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 20:27:32.385791    3204 docker.go:137] docker version: linux-20.10.16
	I0629 20:27:32.395183    3204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:27:34.576725    3204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1815272s)
	I0629 20:27:34.576725    3204 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:90 OomKillDisable:true NGoroutines:61 SystemTime:2022-06-29 20:27:33.4976286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:27:34.585724    3204 out.go:177] * Using the docker driver based on user configuration
	I0629 20:27:34.589733    3204 start.go:284] selected driver: docker
	I0629 20:27:34.589733    3204 start.go:808] validating driver "docker" against <nil>
	I0629 20:27:34.589733    3204 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 20:27:34.655728    3204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:27:37.064196    3204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.4084523s)
	I0629 20:27:37.064196    3204 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:83 OomKillDisable:true NGoroutines:61 SystemTime:2022-06-29 20:27:35.8493218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:27:37.064196    3204 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0629 20:27:37.065220    3204 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 20:27:37.069557    3204 out.go:177] * Using Docker Desktop driver with root privileges
	I0629 20:27:37.072694    3204 cni.go:95] Creating CNI manager for "cilium"
	I0629 20:27:37.072811    3204 start_flags.go:305] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0629 20:27:37.072885    3204 start_flags.go:310] config:
	{Name:cilium-20220629200933-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:cilium-20220629200933-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 20:27:37.077026    3204 out.go:177] * Starting control plane node cilium-20220629200933-2408 in cluster cilium-20220629200933-2408
	I0629 20:27:37.081582    3204 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 20:27:37.083205    3204 out.go:177] * Pulling base image ...
	I0629 20:27:37.087091    3204 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 20:27:37.087091    3204 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 20:27:37.087091    3204 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 20:27:37.087091    3204 cache.go:57] Caching tarball of preloaded images
	I0629 20:27:37.088108    3204 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 20:27:37.088108    3204 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 20:27:37.088108    3204 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\config.json ...
	I0629 20:27:37.088108    3204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\config.json: {Name:mk9816a6cfe1a6a8c15dca70545ffbdf484f3a1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:27:38.368343    3204 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 20:27:38.368343    3204 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 20:27:38.368343    3204 cache.go:208] Successfully downloaded all kic artifacts
	I0629 20:27:38.368343    3204 start.go:352] acquiring machines lock for cilium-20220629200933-2408: {Name:mkcc41171095bbf0fce789676f90f3042f8784a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 20:27:38.368343    3204 start.go:356] acquired machines lock for "cilium-20220629200933-2408" in 0s
	I0629 20:27:38.368343    3204 start.go:91] Provisioning new machine with config: &{Name:cilium-20220629200933-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:cilium-20220629200933-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 20:27:38.368343    3204 start.go:131] createHost starting for "" (driver="docker")
	I0629 20:27:38.371341    3204 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0629 20:27:38.371341    3204 start.go:165] libmachine.API.Create for "cilium-20220629200933-2408" (driver="docker")
	I0629 20:27:38.372343    3204 client.go:168] LocalClient.Create starting
	I0629 20:27:38.373338    3204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I0629 20:27:38.373338    3204 main.go:134] libmachine: Decoding PEM data...
	I0629 20:27:38.373338    3204 main.go:134] libmachine: Parsing certificate...
	I0629 20:27:38.373338    3204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I0629 20:27:38.373338    3204 main.go:134] libmachine: Decoding PEM data...
	I0629 20:27:38.373338    3204 main.go:134] libmachine: Parsing certificate...
	I0629 20:27:38.382359    3204 cli_runner.go:164] Run: docker network inspect cilium-20220629200933-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0629 20:27:39.594771    3204 cli_runner.go:211] docker network inspect cilium-20220629200933-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0629 20:27:39.594771    3204 cli_runner.go:217] Completed: docker network inspect cilium-20220629200933-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2124038s)
	I0629 20:27:39.600773    3204 network_create.go:272] running [docker network inspect cilium-20220629200933-2408] to gather additional debugging logs...
	I0629 20:27:39.600773    3204 cli_runner.go:164] Run: docker network inspect cilium-20220629200933-2408
	W0629 20:27:40.848913    3204 cli_runner.go:211] docker network inspect cilium-20220629200933-2408 returned with exit code 1
	I0629 20:27:40.848913    3204 cli_runner.go:217] Completed: docker network inspect cilium-20220629200933-2408: (1.248132s)
	I0629 20:27:40.848913    3204 network_create.go:275] error running [docker network inspect cilium-20220629200933-2408]: docker network inspect cilium-20220629200933-2408: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220629200933-2408
	I0629 20:27:40.848913    3204 network_create.go:277] output of [docker network inspect cilium-20220629200933-2408]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220629200933-2408
	
	** /stderr **
	I0629 20:27:40.856895    3204 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0629 20:27:42.059278    3204 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2023759s)
	I0629 20:27:42.080281    3204 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000608590] misses:0}
	I0629 20:27:42.080281    3204 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:27:42.080281    3204 network_create.go:115] attempt to create docker network cilium-20220629200933-2408 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0629 20:27:42.087286    3204 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220629200933-2408 cilium-20220629200933-2408
	W0629 20:27:43.276307    3204 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220629200933-2408 cilium-20220629200933-2408 returned with exit code 1
	I0629 20:27:43.276485    3204 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220629200933-2408 cilium-20220629200933-2408: (1.1888991s)
	W0629 20:27:43.276485    3204 network_create.go:107] failed to create docker network cilium-20220629200933-2408 192.168.49.0/24, will retry: subnet is taken
	I0629 20:27:43.296551    3204 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000608590] amended:false}} dirty:map[] misses:0}
	I0629 20:27:43.296551    3204 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:27:43.316554    3204 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000608590] amended:true}} dirty:map[192.168.49.0:0xc000608590 192.168.58.0:0xc0005b0a68] misses:0}
	I0629 20:27:43.316554    3204 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:27:43.316554    3204 network_create.go:115] attempt to create docker network cilium-20220629200933-2408 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0629 20:27:43.325574    3204 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220629200933-2408 cilium-20220629200933-2408
	W0629 20:27:44.543173    3204 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220629200933-2408 cilium-20220629200933-2408 returned with exit code 1
	I0629 20:27:44.543173    3204 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220629200933-2408 cilium-20220629200933-2408: (1.2175909s)
	W0629 20:27:44.543173    3204 network_create.go:107] failed to create docker network cilium-20220629200933-2408 192.168.58.0/24, will retry: subnet is taken
	I0629 20:27:44.565176    3204 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000608590] amended:true}} dirty:map[192.168.49.0:0xc000608590 192.168.58.0:0xc0005b0a68] misses:1}
	I0629 20:27:44.565176    3204 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:27:44.584174    3204 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000608590] amended:true}} dirty:map[192.168.49.0:0xc000608590 192.168.58.0:0xc0005b0a68 192.168.67.0:0xc0001147c8] misses:1}
	I0629 20:27:44.584174    3204 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:27:44.584174    3204 network_create.go:115] attempt to create docker network cilium-20220629200933-2408 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0629 20:27:44.591174    3204 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220629200933-2408 cilium-20220629200933-2408
	W0629 20:27:45.808373    3204 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220629200933-2408 cilium-20220629200933-2408 returned with exit code 1
	I0629 20:27:45.808373    3204 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220629200933-2408 cilium-20220629200933-2408: (1.2171444s)
	W0629 20:27:45.808437    3204 network_create.go:107] failed to create docker network cilium-20220629200933-2408 192.168.67.0/24, will retry: subnet is taken
	I0629 20:27:45.827250    3204 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000608590] amended:true}} dirty:map[192.168.49.0:0xc000608590 192.168.58.0:0xc0005b0a68 192.168.67.0:0xc0001147c8] misses:2}
	I0629 20:27:45.827250    3204 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:27:45.848794    3204 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000608590] amended:true}} dirty:map[192.168.49.0:0xc000608590 192.168.58.0:0xc0005b0a68 192.168.67.0:0xc0001147c8 192.168.76.0:0xc0005b0b00] misses:2}
	I0629 20:27:45.848938    3204 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:27:45.848938    3204 network_create.go:115] attempt to create docker network cilium-20220629200933-2408 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0629 20:27:45.859275    3204 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220629200933-2408 cilium-20220629200933-2408
	W0629 20:27:47.114974    3204 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220629200933-2408 cilium-20220629200933-2408 returned with exit code 1
	I0629 20:27:47.114974    3204 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220629200933-2408 cilium-20220629200933-2408: (1.2556907s)
	W0629 20:27:47.114974    3204 network_create.go:107] failed to create docker network cilium-20220629200933-2408 192.168.76.0/24, will retry: subnet is taken
	I0629 20:27:47.145297    3204 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000608590] amended:true}} dirty:map[192.168.49.0:0xc000608590 192.168.58.0:0xc0005b0a68 192.168.67.0:0xc0001147c8 192.168.76.0:0xc0005b0b00] misses:3}
	I0629 20:27:47.145297    3204 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:27:47.165728    3204 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000608590] amended:true}} dirty:map[192.168.49.0:0xc000608590 192.168.58.0:0xc0005b0a68 192.168.67.0:0xc0001147c8 192.168.76.0:0xc0005b0b00 192.168.85.0:0xc000608628] misses:3}
	I0629 20:27:47.165728    3204 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:27:47.165728    3204 network_create.go:115] attempt to create docker network cilium-20220629200933-2408 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0629 20:27:47.174556    3204 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220629200933-2408 cilium-20220629200933-2408
	I0629 20:27:48.530477    3204 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-20220629200933-2408 cilium-20220629200933-2408: (1.3559121s)
	I0629 20:27:48.530477    3204 network_create.go:99] docker network cilium-20220629200933-2408 192.168.85.0/24 created
	I0629 20:27:48.530477    3204 kic.go:106] calculated static IP "192.168.85.2" for the "cilium-20220629200933-2408" container
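The retry sequence above shows minikube's subnet picker walking candidate /24 networks after each "subnet is taken" failure: 192.168.49.0 → 192.168.58.0 → 192.168.67.0 → 192.168.76.0 → 192.168.85.0, i.e. the third octet stepping by 9 until `docker network create` succeeds. A minimal illustrative sketch of that stepping pattern (not minikube's actual implementation; `candidateSubnets` and its parameters are hypothetical names chosen for this example):

```go
package main

import "fmt"

// candidateSubnets lists /24 subnets the way the log above shows them being
// tried: start at 192.168.<start>.0/24 and step the third octet by <step>
// after each "subnet is taken" failure, until one is free or the octet
// range is exhausted.
func candidateSubnets(start, step, count int) []string {
	subnets := make([]string, 0, count)
	for octet := start; len(subnets) < count && octet < 256; octet += step {
		subnets = append(subnets, fmt.Sprintf("192.168.%d.0/24", octet))
	}
	return subnets
}

func main() {
	// Reproduces the sequence seen in this log: 49, 58, 67, 76, 85.
	for _, s := range candidateSubnets(49, 9, 5) {
		fmt.Println(s)
	}
}
```

In this run the first four candidates collided with networks already present on the Docker Desktop host, and 192.168.85.0/24 was the first free one, which is why the node gets the static IP 192.168.85.2.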
	I0629 20:27:48.554355    3204 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0629 20:27:49.768839    3204 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.2143577s)
	I0629 20:27:49.776947    3204 cli_runner.go:164] Run: docker volume create cilium-20220629200933-2408 --label name.minikube.sigs.k8s.io=cilium-20220629200933-2408 --label created_by.minikube.sigs.k8s.io=true
	I0629 20:27:51.061354    3204 cli_runner.go:217] Completed: docker volume create cilium-20220629200933-2408 --label name.minikube.sigs.k8s.io=cilium-20220629200933-2408 --label created_by.minikube.sigs.k8s.io=true: (1.2843983s)
	I0629 20:27:51.061354    3204 oci.go:103] Successfully created a docker volume cilium-20220629200933-2408
	I0629 20:27:51.074340    3204 cli_runner.go:164] Run: docker run --rm --name cilium-20220629200933-2408-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220629200933-2408 --entrypoint /usr/bin/test -v cilium-20220629200933-2408:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib
	I0629 20:27:54.177147    3204 cli_runner.go:217] Completed: docker run --rm --name cilium-20220629200933-2408-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220629200933-2408 --entrypoint /usr/bin/test -v cilium-20220629200933-2408:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib: (3.101849s)
	I0629 20:27:54.177147    3204 oci.go:107] Successfully prepared a docker volume cilium-20220629200933-2408
	I0629 20:27:54.177147    3204 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 20:27:54.177147    3204 kic.go:179] Starting extracting preloaded images to volume ...
	I0629 20:27:54.183140    3204 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220629200933-2408:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir
	I0629 20:28:18.888886    3204 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220629200933-2408:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir: (24.7055868s)
	I0629 20:28:18.888886    3204 kic.go:188] duration metric: took 24.711581 seconds to extract preloaded images to volume
	I0629 20:28:18.896376    3204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:28:21.465454    3204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.5690612s)
	I0629 20:28:21.465454    3204 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:84 OomKillDisable:true NGoroutines:61 SystemTime:2022-06-29 20:28:20.2285162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:28:21.473444    3204 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0629 20:28:24.089945    3204 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.6162214s)
	I0629 20:28:24.098953    3204 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220629200933-2408 --name cilium-20220629200933-2408 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220629200933-2408 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220629200933-2408 --network cilium-20220629200933-2408 --ip 192.168.85.2 --volume cilium-20220629200933-2408:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e
	I0629 20:28:26.726852    3204 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220629200933-2408 --name cilium-20220629200933-2408 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220629200933-2408 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220629200933-2408 --network cilium-20220629200933-2408 --ip 192.168.85.2 --volume cilium-20220629200933-2408:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e: (2.6278819s)
	I0629 20:28:26.750831    3204 cli_runner.go:164] Run: docker container inspect cilium-20220629200933-2408 --format={{.State.Running}}
	I0629 20:28:28.208589    3204 cli_runner.go:217] Completed: docker container inspect cilium-20220629200933-2408 --format={{.State.Running}}: (1.4577487s)
	I0629 20:28:28.215576    3204 cli_runner.go:164] Run: docker container inspect cilium-20220629200933-2408 --format={{.State.Status}}
	I0629 20:28:29.570051    3204 cli_runner.go:217] Completed: docker container inspect cilium-20220629200933-2408 --format={{.State.Status}}: (1.3544662s)
	I0629 20:28:29.577049    3204 cli_runner.go:164] Run: docker exec cilium-20220629200933-2408 stat /var/lib/dpkg/alternatives/iptables
	I0629 20:28:31.208419    3204 cli_runner.go:217] Completed: docker exec cilium-20220629200933-2408 stat /var/lib/dpkg/alternatives/iptables: (1.6312122s)
	I0629 20:28:31.208419    3204 oci.go:144] the created container "cilium-20220629200933-2408" has a running status.
	I0629 20:28:31.208494    3204 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220629200933-2408\id_rsa...
	I0629 20:28:31.710577    3204 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220629200933-2408\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0629 20:28:33.240648    3204 cli_runner.go:164] Run: docker container inspect cilium-20220629200933-2408 --format={{.State.Status}}
	I0629 20:28:34.556630    3204 cli_runner.go:217] Completed: docker container inspect cilium-20220629200933-2408 --format={{.State.Status}}: (1.3159738s)
	I0629 20:28:34.572633    3204 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0629 20:28:34.572633    3204 kic_runner.go:114] Args: [docker exec --privileged cilium-20220629200933-2408 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0629 20:28:36.105590    3204 kic_runner.go:123] Done: [docker exec --privileged cilium-20220629200933-2408 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.5329473s)
	I0629 20:28:36.109608    3204 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220629200933-2408\id_rsa...
	I0629 20:28:36.837475    3204 cli_runner.go:164] Run: docker container inspect cilium-20220629200933-2408 --format={{.State.Status}}
	I0629 20:28:38.247110    3204 cli_runner.go:217] Completed: docker container inspect cilium-20220629200933-2408 --format={{.State.Status}}: (1.4096257s)
	I0629 20:28:38.247110    3204 machine.go:88] provisioning docker machine ...
	I0629 20:28:38.247110    3204 ubuntu.go:169] provisioning hostname "cilium-20220629200933-2408"
	I0629 20:28:38.254111    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408
	I0629 20:28:39.589757    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408: (1.3356379s)
	I0629 20:28:39.593751    3204 main.go:134] libmachine: Using SSH client type: native
	I0629 20:28:39.601769    3204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 57612 <nil> <nil>}
	I0629 20:28:39.601769    3204 main.go:134] libmachine: About to run SSH command:
	sudo hostname cilium-20220629200933-2408 && echo "cilium-20220629200933-2408" | sudo tee /etc/hostname
	I0629 20:28:39.800007    3204 main.go:134] libmachine: SSH cmd err, output: <nil>: cilium-20220629200933-2408
	
	I0629 20:28:39.809006    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408
	I0629 20:28:41.126571    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408: (1.3175562s)
	I0629 20:28:41.132557    3204 main.go:134] libmachine: Using SSH client type: native
	I0629 20:28:41.133591    3204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 57612 <nil> <nil>}
	I0629 20:28:41.133591    3204 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20220629200933-2408' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20220629200933-2408/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20220629200933-2408' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 20:28:41.361392    3204 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 20:28:41.361392    3204 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I0629 20:28:41.361392    3204 ubuntu.go:177] setting up certificates
	I0629 20:28:41.361392    3204 provision.go:83] configureAuth start
	I0629 20:28:41.368384    3204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220629200933-2408
	I0629 20:28:42.653902    3204 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220629200933-2408: (1.2855097s)
	I0629 20:28:42.653902    3204 provision.go:138] copyHostCerts
	I0629 20:28:42.653902    3204 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I0629 20:28:42.653902    3204 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I0629 20:28:42.654912    3204 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0629 20:28:42.655916    3204 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I0629 20:28:42.655916    3204 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I0629 20:28:42.656906    3204 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0629 20:28:42.657917    3204 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I0629 20:28:42.657917    3204 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I0629 20:28:42.658914    3204 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1679 bytes)
	I0629 20:28:42.659936    3204 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cilium-20220629200933-2408 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-20220629200933-2408]
	I0629 20:28:42.905063    3204 provision.go:172] copyRemoteCerts
	I0629 20:28:42.920074    3204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 20:28:42.926097    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408
	I0629 20:28:44.180352    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408: (1.2541248s)
	I0629 20:28:44.180694    3204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57612 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220629200933-2408\id_rsa Username:docker}
	I0629 20:28:44.352691    3204 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4325094s)
	I0629 20:28:44.353189    3204 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0629 20:28:44.428463    3204 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0629 20:28:44.486514    3204 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 20:28:44.554997    3204 provision.go:86] duration metric: configureAuth took 3.1935483s
	I0629 20:28:44.555079    3204 ubuntu.go:193] setting minikube options for container-runtime
	I0629 20:28:44.556250    3204 config.go:178] Loaded profile config "cilium-20220629200933-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:28:44.570804    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408
	I0629 20:28:45.827085    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408: (1.2561735s)
	I0629 20:28:45.832804    3204 main.go:134] libmachine: Using SSH client type: native
	I0629 20:28:45.833528    3204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 57612 <nil> <nil>}
	I0629 20:28:45.833619    3204 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 20:28:46.048225    3204 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 20:28:46.048225    3204 ubuntu.go:71] root file system type: overlay
	I0629 20:28:46.049104    3204 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 20:28:46.063909    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408
	I0629 20:28:47.403758    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408: (1.3398407s)
	I0629 20:28:47.408707    3204 main.go:134] libmachine: Using SSH client type: native
	I0629 20:28:47.408707    3204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 57612 <nil> <nil>}
	I0629 20:28:47.408707    3204 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 20:28:47.637554    3204 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 20:28:47.654610    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408
	I0629 20:28:49.062521    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408: (1.4079026s)
	I0629 20:28:49.068527    3204 main.go:134] libmachine: Using SSH client type: native
	I0629 20:28:49.068527    3204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 57612 <nil> <nil>}
	I0629 20:28:49.069577    3204 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 20:28:50.702412    3204 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-29 20:28:47.618470000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0629 20:28:50.702412    3204 machine.go:91] provisioned docker machine in 12.4552226s
	I0629 20:28:50.702412    3204 client.go:171] LocalClient.Create took 1m12.3296034s
	I0629 20:28:50.702412    3204 start.go:173] duration metric: libmachine.API.Create for "cilium-20220629200933-2408" took 1m12.3306059s
	I0629 20:28:50.702412    3204 start.go:306] post-start starting for "cilium-20220629200933-2408" (driver="docker")
	I0629 20:28:50.702412    3204 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 20:28:50.712395    3204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 20:28:50.724416    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408
	I0629 20:28:52.121568    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408: (1.3971428s)
	I0629 20:28:52.121568    3204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57612 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220629200933-2408\id_rsa Username:docker}
	I0629 20:28:52.273400    3204 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.560919s)
	I0629 20:28:52.283370    3204 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 20:28:52.293384    3204 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 20:28:52.293384    3204 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 20:28:52.293384    3204 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 20:28:52.293384    3204 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 20:28:52.293384    3204 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I0629 20:28:52.293384    3204 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I0629 20:28:52.294387    3204 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem -> 24082.pem in /etc/ssl/certs
	I0629 20:28:52.303371    3204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 20:28:52.326794    3204 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem --> /etc/ssl/certs/24082.pem (1708 bytes)
	I0629 20:28:52.395466    3204 start.go:309] post-start completed in 1.6930434s
	I0629 20:28:52.404459    3204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220629200933-2408
	I0629 20:28:53.681300    3204 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220629200933-2408: (1.2768331s)
	I0629 20:28:53.681300    3204 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\config.json ...
	I0629 20:28:53.692309    3204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 20:28:53.701992    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408
	I0629 20:28:54.990131    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408: (1.2881304s)
	I0629 20:28:54.990131    3204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57612 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220629200933-2408\id_rsa Username:docker}
	I0629 20:28:55.124794    3204 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.4324764s)
	I0629 20:28:55.132790    3204 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 20:28:55.222216    3204 start.go:134] duration metric: createHost completed in 1m16.8533781s
	I0629 20:28:55.222216    3204 start.go:81] releasing machines lock for "cilium-20220629200933-2408", held for 1m16.8533781s
	I0629 20:28:55.231209    3204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220629200933-2408
	I0629 20:28:56.386916    3204 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220629200933-2408: (1.1556988s)
	I0629 20:28:56.389921    3204 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 20:28:56.400920    3204 ssh_runner.go:195] Run: systemctl --version
	I0629 20:28:56.402933    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408
	I0629 20:28:56.410735    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408
	I0629 20:28:57.618105    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408: (1.2073622s)
	I0629 20:28:57.618105    3204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57612 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220629200933-2408\id_rsa Username:docker}
	I0629 20:28:57.634108    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408: (1.2311667s)
	I0629 20:28:57.634108    3204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57612 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220629200933-2408\id_rsa Username:docker}
	I0629 20:28:57.805631    3204 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.4157009s)
	I0629 20:28:57.805841    3204 ssh_runner.go:235] Completed: systemctl --version: (1.4048703s)
	I0629 20:28:57.825608    3204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0629 20:28:57.855272    3204 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0629 20:28:57.905231    3204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 20:28:58.061270    3204 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0629 20:28:58.278465    3204 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 20:28:58.308688    3204 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 20:28:58.321819    3204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 20:28:58.351453    3204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 20:28:58.392449    3204 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 20:28:58.616447    3204 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 20:28:58.805413    3204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 20:28:58.984549    3204 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 20:28:59.538720    3204 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 20:28:59.768141    3204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 20:28:59.978448    3204 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 20:29:00.007744    3204 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 20:29:00.022789    3204 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 20:29:00.040398    3204 start.go:468] Will wait 60s for crictl version
	I0629 20:29:00.049393    3204 ssh_runner.go:195] Run: sudo crictl version
	I0629 20:29:00.143847    3204 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 20:29:00.152933    3204 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 20:29:00.239529    3204 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 20:29:00.340862    3204 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 20:29:00.348011    3204 cli_runner.go:164] Run: docker exec -t cilium-20220629200933-2408 dig +short host.docker.internal
	I0629 20:29:01.676557    3204 cli_runner.go:217] Completed: docker exec -t cilium-20220629200933-2408 dig +short host.docker.internal: (1.3285368s)
	I0629 20:29:01.676557    3204 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 20:29:01.685557    3204 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 20:29:01.702544    3204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 20:29:01.739548    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220629200933-2408
	I0629 20:29:02.933332    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220629200933-2408: (1.1935361s)
	I0629 20:29:02.933800    3204 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 20:29:02.946728    3204 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 20:29:03.040394    3204 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0629 20:29:03.040394    3204 docker.go:533] Images already preloaded, skipping extraction
	I0629 20:29:03.053290    3204 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 20:29:03.154136    3204 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0629 20:29:03.154136    3204 cache_images.go:84] Images are preloaded, skipping loading
	I0629 20:29:03.161175    3204 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 20:29:03.365967    3204 cni.go:95] Creating CNI manager for "cilium"
	I0629 20:29:03.365967    3204 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 20:29:03.365967    3204 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-20220629200933-2408 NodeName:cilium-20220629200933-2408 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 20:29:03.366963    3204 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "cilium-20220629200933-2408"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 20:29:03.366963    3204 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=cilium-20220629200933-2408 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:cilium-20220629200933-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
	I0629 20:29:03.376959    3204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 20:29:03.396959    3204 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 20:29:03.406957    3204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 20:29:03.433285    3204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
	I0629 20:29:03.478945    3204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 20:29:03.510955    3204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
	I0629 20:29:03.565929    3204 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0629 20:29:03.584137    3204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 20:29:03.610368    3204 certs.go:54] Setting up C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408 for IP: 192.168.85.2
	I0629 20:29:03.611352    3204 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key
	I0629 20:29:03.612698    3204 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key
	I0629 20:29:03.613585    3204 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\client.key
	I0629 20:29:03.613827    3204 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\client.crt with IP's: []
	I0629 20:29:03.732294    3204 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\client.crt ...
	I0629 20:29:03.732294    3204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\client.crt: {Name:mk9aafd861f460e5e93f11003d0e0f67ea5ec869 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:29:03.734320    3204 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\client.key ...
	I0629 20:29:03.734320    3204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\client.key: {Name:mk32454c26ee8f0372717ffe94a8067d7abd5e78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:29:03.735312    3204 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\apiserver.key.43b9df8c
	I0629 20:29:03.736302    3204 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0629 20:29:04.051570    3204 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\apiserver.crt.43b9df8c ...
	I0629 20:29:04.052557    3204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\apiserver.crt.43b9df8c: {Name:mk749fb2468eea7cbf51c2d66ebb3b85e122a69b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:29:04.053558    3204 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\apiserver.key.43b9df8c ...
	I0629 20:29:04.053558    3204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\apiserver.key.43b9df8c: {Name:mk04a7ef92fbac27979dcb7bf208fcbe069e8735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:29:04.054561    3204 certs.go:320] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\apiserver.crt.43b9df8c -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\apiserver.crt
	I0629 20:29:04.061556    3204 certs.go:324] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\apiserver.key.43b9df8c -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\apiserver.key
	I0629 20:29:04.062965    3204 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\proxy-client.key
	I0629 20:29:04.063401    3204 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\proxy-client.crt with IP's: []
	I0629 20:29:04.175878    3204 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\proxy-client.crt ...
	I0629 20:29:04.175878    3204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\proxy-client.crt: {Name:mkd2f4655cb53d9a4b8fc834d1f1fd736772ed60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:29:04.177284    3204 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\proxy-client.key ...
	I0629 20:29:04.177284    3204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\proxy-client.key: {Name:mk7ef463a889a8d2057fd6320abe3855b24477f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:29:04.186891    3204 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem (1338 bytes)
	W0629 20:29:04.186891    3204 certs.go:384] ignoring C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408_empty.pem, impossibly tiny 0 bytes
	I0629 20:29:04.186891    3204 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0629 20:29:04.187448    3204 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0629 20:29:04.187786    3204 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0629 20:29:04.187969    3204 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0629 20:29:04.188249    3204 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem (1708 bytes)
	I0629 20:29:04.188764    3204 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 20:29:04.268598    3204 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 20:29:04.338494    3204 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 20:29:04.393064    3204 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220629200933-2408\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0629 20:29:04.454587    3204 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 20:29:04.511846    3204 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 20:29:04.638729    3204 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 20:29:04.718044    3204 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 20:29:04.776053    3204 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem --> /usr/share/ca-certificates/24082.pem (1708 bytes)
	I0629 20:29:04.828085    3204 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 20:29:04.891047    3204 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem --> /usr/share/ca-certificates/2408.pem (1338 bytes)
	I0629 20:29:04.960141    3204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 20:29:05.016122    3204 ssh_runner.go:195] Run: openssl version
	I0629 20:29:05.048125    3204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24082.pem && ln -fs /usr/share/ca-certificates/24082.pem /etc/ssl/certs/24082.pem"
	I0629 20:29:05.088740    3204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24082.pem
	I0629 20:29:05.102730    3204 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 18:12 /usr/share/ca-certificates/24082.pem
	I0629 20:29:05.116737    3204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24082.pem
	I0629 20:29:05.150741    3204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24082.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 20:29:05.192685    3204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 20:29:05.233669    3204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 20:29:05.244684    3204 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I0629 20:29:05.255662    3204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 20:29:05.301992    3204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 20:29:05.334976    3204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2408.pem && ln -fs /usr/share/ca-certificates/2408.pem /etc/ssl/certs/2408.pem"
	I0629 20:29:05.366840    3204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2408.pem
	I0629 20:29:05.376849    3204 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 18:12 /usr/share/ca-certificates/2408.pem
	I0629 20:29:05.388720    3204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2408.pem
	I0629 20:29:05.432125    3204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2408.pem /etc/ssl/certs/51391683.0"
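The `openssl x509 -hash` / `ln -fs ... <hash>.0` pairs above follow OpenSSL's subject-hash lookup convention: the trust store is searched by a hash of the certificate subject, so each CA cert needs a symlink named `<hash>.0`. A minimal sketch of that pattern, using a throwaway self-signed cert in a temp directory (both are assumptions for illustration; the real run targets `/etc/ssl/certs`):

```shell
# Sketch of the subject-hash symlink convention used in the log above.
# All paths are a throwaway temp dir, not the real /etc/ssl/certs.
set -e
dir=$(mktemp -d)
# Stand-in self-signed cert (the run above links minikubeCA.pem etc.).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" \
  -keyout "$dir/demo.key" -out "$dir/demo.pem" 2>/dev/null
# OpenSSL locates CA certs by subject hash, so the link must be <hash>.0.
hash=$(openssl x509 -hash -noout -in "$dir/demo.pem")
ln -fs "$dir/demo.pem" "$dir/$hash.0"
echo "linked $hash.0 -> demo.pem"
```

This is why the log pairs each `openssl x509 -hash -noout` run with a `test -L ... || ln -fs` guard: the link is only (re)created when the hash-named entry is missing.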
	I0629 20:29:05.454073    3204 kubeadm.go:395] StartCluster: {Name:cilium-20220629200933-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:cilium-20220629200933-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 20:29:05.461092    3204 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 20:29:05.568525    3204 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 20:29:05.626600    3204 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 20:29:05.747328    3204 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 20:29:05.765358    3204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 20:29:05.795338    3204 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
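The "config check failed" message above is benign: the check is just an `ls` existence probe over the kubeadm config files, and a non-zero exit (status 2 here) means no prior configs exist, so stale-config cleanup is skipped and `kubeadm init` runs on a fresh node. A rough sketch of that probe pattern (the temp dir standing in for `/etc/kubernetes` is an assumption):

```shell
# Existence probe in the style of the check above: ls exits non-zero
# when any listed file is missing, which signals a fresh node.
dir=$(mktemp -d)   # stand-in for /etc/kubernetes
if ls -la "$dir/admin.conf" "$dir/kubelet.conf" >/dev/null 2>&1; then
  echo "existing configs found: running stale config cleanup"
else
  echo "fresh node: skipping stale config cleanup"
fi
```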
	I0629 20:29:05.795338    3204 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 20:29:34.429347    3204 out.go:204]   - Generating certificates and keys ...
	I0629 20:29:34.434328    3204 out.go:204]   - Booting up control plane ...
	I0629 20:29:34.439338    3204 out.go:204]   - Configuring RBAC rules ...
	I0629 20:29:34.443324    3204 cni.go:95] Creating CNI manager for "cilium"
	I0629 20:29:34.448330    3204 out.go:177] * Configuring Cilium (Container Networking Interface) ...
	I0629 20:29:34.462345    3204 ssh_runner.go:195] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
	I0629 20:29:34.554586    3204 cilium.go:816] Using pod CIDR: 10.244.0.0/16
	I0629 20:29:34.554586    3204 cilium.go:827] cilium options: {PodSubnet:10.244.0.0/16}
	I0629 20:29:34.554586    3204 cilium.go:831] cilium config:
	---
	# Source: cilium/templates/cilium-agent-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-configmap.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: cilium-config
	  namespace: kube-system
	data:
	
	  # Identity allocation mode selects how identities are shared between cilium
	  # nodes by setting how they are stored. The options are "crd" or "kvstore".
	  # - "crd" stores identities in kubernetes as CRDs (custom resource definition).
	  #   These can be queried with:
	  #     kubectl get ciliumid
	  # - "kvstore" stores identities in a kvstore, etcd or consul, that is
	  #   configured below. Cilium versions before 1.6 supported only the kvstore
	  #   backend. Upgrades from these older cilium versions should continue using
	  #   the kvstore by commenting out the identity-allocation-mode below, or
	  #   setting it to "kvstore".
	  identity-allocation-mode: crd
	  cilium-endpoint-gc-interval: "5m0s"
	
	  # If you want to run cilium in debug mode change this value to true
	  debug: "false"
	  # The agent can be put into the following three policy enforcement modes
	  # default, always and never.
	  # https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
	  enable-policy: "default"
	
	  # Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
	  # address.
	  enable-ipv4: "true"
	
	  # Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
	  # address.
	  enable-ipv6: "false"
	  # Users who wish to specify their own custom CNI configuration file must set
	  # custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
	  custom-cni-conf: "false"
	  enable-bpf-clock-probe: "true"
	  # If you want cilium monitor to aggregate tracing for packets, set this level
	  # to "low", "medium", or "maximum". The higher the level, the fewer packets
	  # will be seen in monitor output.
	  monitor-aggregation: medium
	
	  # The monitor aggregation interval governs the typical time between monitor
	  # notification events for each allowed connection.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-interval: 5s
	
	  # The monitor aggregation flags determine which TCP flags, upon first
	  # observation, cause monitor notifications to be generated.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-flags: all
	  # Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
	  # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
	  bpf-map-dynamic-size-ratio: "0.0025"
	  # bpf-policy-map-max specifies the maximum number of entries in endpoint
	  # policy map (per endpoint)
	  bpf-policy-map-max: "16384"
	  # bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
	  # backend and affinity maps.
	  bpf-lb-map-max: "65536"
	  # Pre-allocation of map entries allows per-packet latency to be reduced, at
	  # the expense of up-front memory allocation for the entries in the maps. The
	  # default value below will minimize memory usage in the default installation;
	  # users who are sensitive to latency may consider setting this to "true".
	  #
	  # This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
	  # this option and behave as though it is set to "true".
	  #
	  # If this value is modified, then during the next Cilium startup the restore
	  # of existing endpoints and tracking of ongoing connections may be disrupted.
	  # As a result, reply packets may be dropped and the load-balancing decisions
	  # for established connections may change.
	  #
	  # If this option is set to "false" during an upgrade from 1.3 or earlier to
	  # 1.4 or later, then it may cause one-time disruptions during the upgrade.
	  preallocate-bpf-maps: "false"
	
	  # Regular expression matching compatible Istio sidecar istio-proxy
	  # container image names
	  sidecar-istio-proxy-image: "cilium/istio_proxy"
	
	  # Name of the cluster. Only relevant when building a mesh of clusters.
	  cluster-name: default
	  # Unique ID of the cluster. Must be unique across all connected clusters and
	  # in the range of 1 to 255. Only relevant when building a mesh of clusters.
	  cluster-id: ""
	
	  # Encapsulation mode for communication between nodes
	  # Possible values:
	  #   - disabled
	  #   - vxlan (default)
	  #   - geneve
	  tunnel: vxlan
	  # Enables L7 proxy for L7 policy enforcement and visibility
	  enable-l7-proxy: "true"
	
	  # wait-bpf-mount makes init container wait until bpf filesystem is mounted
	  wait-bpf-mount: "false"
	
	  masquerade: "true"
	  enable-bpf-masquerade: "true"
	
	  enable-xt-socket-fallback: "true"
	  install-iptables-rules: "true"
	
	  auto-direct-node-routes: "false"
	  enable-bandwidth-manager: "false"
	  enable-local-redirect-policy: "false"
	  kube-proxy-replacement:  "probe"
	  kube-proxy-replacement-healthz-bind-address: ""
	  enable-health-check-nodeport: "true"
	  node-port-bind-protection: "true"
	  enable-auto-protect-node-port-range: "true"
	  enable-session-affinity: "true"
	  k8s-require-ipv4-pod-cidr: "true"
	  k8s-require-ipv6-pod-cidr: "false"
	  enable-endpoint-health-checking: "true"
	  enable-health-checking: "true"
	  enable-well-known-identities: "false"
	  enable-remote-node-identity: "true"
	  operator-api-serve-addr: "127.0.0.1:9234"
	  # Enable Hubble gRPC service.
	  enable-hubble: "true"
	  # UNIX domain socket for Hubble server to listen to.
	  hubble-socket-path:  "/var/run/cilium/hubble.sock"
	  # An additional address for Hubble server to listen to (e.g. ":4244").
	  hubble-listen-address: ":4244"
	  hubble-disable-tls: "false"
	  hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
	  hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
	  hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
	  ipam: "cluster-pool"
	  cluster-pool-ipv4-cidr: "10.244.0.0/16"
	  cluster-pool-ipv4-mask-size: "24"
	  disable-cnp-status-updates: "true"
	  cgroup-root: "/run/cilium/cgroupv2"
	---
	# Source: cilium/templates/cilium-agent-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium
	rules:
	- apiGroups:
	  - networking.k8s.io
	  resources:
	  - networkpolicies
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - namespaces
	  - services
	  - nodes
	  - endpoints
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - pods
	  - pods/finalizers
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	  - delete
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  - nodes/status
	  verbs:
	  - patch
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  # Deprecated for removal in v1.10
	  - create
	  - list
	  - watch
	  - update
	
	  # This is used when validating policies in preflight. This will need to stay
	  # until we figure out how to avoid "get" inside the preflight, and then
	  # should be removed ideally.
	  - get
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	---
	# Source: cilium/templates/cilium-operator-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium-operator
	rules:
	- apiGroups:
	  - ""
	  resources:
	  # to automatically delete [core|kube]dns pods so that they start being
	  # managed by Cilium
	  - pods
	  verbs:
	  - get
	  - list
	  - watch
	  - delete
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  # to perform the translation of a CNP that contains 'ToGroup' to its endpoints
	  - services
	  - endpoints
	  # to check apiserver connectivity
	  - namespaces
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/status
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  - create
	  - get
	  - list
	  - update
	  - watch
	# For cilium-operator running in HA mode.
	#
	# Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
	# between multiple running instances.
	# The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
	# common and fewer objects in the cluster watch "all Leases".
	# The support for leases was introduced in coordination.k8s.io/v1 during Kubernetes 1.14 release.
	# In Cilium we currently don't support HA mode for K8s version < 1.14. This condition makes sure
	# that we only authorize access to leases resources in supported K8s versions.
	- apiGroups:
	  - coordination.k8s.io
	  resources:
	  - leases
	  verbs:
	  - create
	  - get
	  - update
	---
	# Source: cilium/templates/cilium-agent-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium
	subjects:
	- kind: ServiceAccount
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium-operator
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium-operator
	subjects:
	- kind: ServiceAccount
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-agent-daemonset.yaml
	apiVersion: apps/v1
	kind: DaemonSet
	metadata:
	  labels:
	    k8s-app: cilium
	  name: cilium
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: cilium
	  updateStrategy:
	    rollingUpdate:
	      maxUnavailable: 2
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	        # This annotation plus the CriticalAddonsOnly toleration marks
	        # cilium as a critical pod in the cluster, which ensures cilium
	        # gets priority scheduling.
	        # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
	        scheduler.alpha.kubernetes.io/critical-pod: ""
	      labels:
	        k8s-app: cilium
	    spec:
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: k8s-app
	                operator: In
	                values:
	                - cilium
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        command:
	        - cilium-agent
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 10
	          # The initial delay for the liveness probe is intentionally large to
	          # avoid an endless kill & restart cycle in the event that the initial
	          # bootstrapping takes longer than expected.
	          initialDelaySeconds: 120
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        readinessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 3
	          initialDelaySeconds: 5
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_FLANNEL_MASTER_DEVICE
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-master-device
	              name: cilium-config
	              optional: true
	        - name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-uninstall-on-exit
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CLUSTERMESH_CONFIG
	          value: /var/lib/cilium/clustermesh/
	        - name: CILIUM_CNI_CHAINING_MODE
	          valueFrom:
	            configMapKeyRef:
	              key: cni-chaining-mode
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CUSTOM_CNI_CONF
	          valueFrom:
	            configMapKeyRef:
	              key: custom-cni-conf
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        lifecycle:
	          postStart:
	            exec:
	              command:
	              - "/cni-install.sh"
	              - "--enable-debug=false"
	          preStop:
	            exec:
	              command:
	              - /cni-uninstall.sh
	        name: cilium-agent
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	            - SYS_MODULE
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        - mountPath: /host/opt/cni/bin
	          name: cni-path
	        - mountPath: /host/etc/cni/net.d
	          name: etc-cni-netd
	        - mountPath: /var/lib/cilium/clustermesh
	          name: clustermesh-secrets
	          readOnly: true
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	          # Needed to be able to load kernel modules
	        - mountPath: /lib/modules
	          name: lib-modules
	          readOnly: true
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	        - mountPath: /var/lib/cilium/tls/hubble
	          name: hubble-tls
	          readOnly: true
	      hostNetwork: true
	      initContainers:
	      # Required to mount cgroup2 filesystem on the underlying Kubernetes node.
	      # We use the nsenter command with the host's cgroup and mount namespaces enabled.
	      - name: mount-cgroup
	        env:
	          - name: CGROUP_ROOT
	            value: /run/cilium/cgroupv2
	          - name: BIN_PATH
	            value: /opt/cni/bin
	        command:
	          - sh
	          - -c
	          # The statically linked Go program binary is invoked to avoid any
	          # dependency on utilities like sh and mount that can be missing on certain
	          # distros installed on the underlying host. Copy the binary to the
	          # same directory where we install cilium cni plugin so that exec permissions
	          # are available.
	          - 'cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; rm /hostbin/cilium-mount'
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        volumeMounts:
	          - mountPath: /hostproc
	            name: hostproc
	          - mountPath: /hostbin
	            name: cni-path
	        securityContext:
	          privileged: true
	      - command:
	        - /init-container.sh
	        env:
	        - name: CILIUM_ALL_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_BPF_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-bpf-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_WAIT_BPF_MOUNT
	          valueFrom:
	            configMapKeyRef:
	              key: wait-bpf-mount
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        name: clean-cilium-state
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	          mountPropagation: HostToContainer
	          # Required to mount cgroup filesystem from the host to cilium agent pod
	        - mountPath: /run/cilium/cgroupv2
	          name: cilium-cgroup
	          mountPropagation: HostToContainer
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        resources:
	          requests:
	            cpu: 100m
	            memory: 100Mi
	      restartPolicy: Always
	      priorityClassName: system-node-critical
	      serviceAccount: cilium
	      serviceAccountName: cilium
	      terminationGracePeriodSeconds: 1
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To keep state between restarts / upgrades
	      - hostPath:
	          path: /var/run/cilium
	          type: DirectoryOrCreate
	        name: cilium-run
	        # To keep state between restarts / upgrades for bpf maps
	      - hostPath:
	          path: /sys/fs/bpf
	          type: DirectoryOrCreate
	        name: bpf-maps
	      # To mount cgroup2 filesystem on the host
	      - hostPath:
	          path: /proc
	          type: Directory
	        name: hostproc
	      # To keep state between restarts / upgrades for cgroup2 filesystem
	      - hostPath:
	          path: /run/cilium/cgroupv2
	          type: DirectoryOrCreate
	        name: cilium-cgroup
	      # To install cilium cni plugin in the host
	      - hostPath:
	          path: /opt/cni/bin
	          type: DirectoryOrCreate
	        name: cni-path
	        # To install cilium cni configuration in the host
	      - hostPath:
	          path: /etc/cni/net.d
	          type: DirectoryOrCreate
	        name: etc-cni-netd
	        # To be able to load kernel modules
	      - hostPath:
	          path: /lib/modules
	        name: lib-modules
	        # To access iptables concurrently with other processes (e.g. kube-proxy)
	      - hostPath:
	          path: /run/xtables.lock
	          type: FileOrCreate
	        name: xtables-lock
	        # To read the clustermesh configuration
	      - name: clustermesh-secrets
	        secret:
	          defaultMode: 420
	          optional: true
	          secretName: cilium-clustermesh
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	      - name: hubble-tls
	        projected:
	          sources:
	          - secret:
	              name: hubble-server-certs
	              items:
	                - key: tls.crt
	                  path: server.crt
	                - key: tls.key
	                  path: server.key
	              optional: true
	          - configMap:
	              name: hubble-ca-cert
	              items:
	                - key: ca.crt
	                  path: client-ca.crt
	              optional: true
	---
	# Source: cilium/templates/cilium-operator-deployment.yaml
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  labels:
	    io.cilium/app: operator
	    name: cilium-operator
	  name: cilium-operator
	  namespace: kube-system
	spec:
	  # We support HA mode only for Kubernetes version > 1.14
	  # See docs on ServerCapabilities.LeasesResourceLock in file pkg/k8s/version/version.go
	  # for more details.
	  replicas: 1
	  selector:
	    matchLabels:
	      io.cilium/app: operator
	      name: cilium-operator
	  strategy:
	    rollingUpdate:
	      maxSurge: 1
	      maxUnavailable: 1
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	      labels:
	        io.cilium/app: operator
	        name: cilium-operator
	    spec:
	      # In HA mode, cilium-operator pods must not be scheduled on the same
	      # node as they will clash with each other.
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: io.cilium/app
	                operator: In
	                values:
	                - operator
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        - --debug=$(CILIUM_DEBUG)
	        command:
	        - cilium-operator-generic
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_DEBUG
	          valueFrom:
	            configMapKeyRef:
	              key: debug
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/operator-generic:v1.9.9@sha256:3726a965cd960295ca3c5e7f2b543c02096c0912c6652eb8bbb9ce54bcaa99d8"
	        imagePullPolicy: IfNotPresent
	        name: cilium-operator
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9234
	            scheme: HTTP
	          initialDelaySeconds: 60
	          periodSeconds: 10
	          timeoutSeconds: 3
	        volumeMounts:
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	      hostNetwork: true
	      restartPolicy: Always
	      priorityClassName: system-cluster-critical
	      serviceAccount: cilium-operator
	      serviceAccountName: cilium-operator
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	
	I0629 20:29:34.554586    3204 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0629 20:29:34.554586    3204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (23204 bytes)
	I0629 20:29:34.852926    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0629 20:29:38.766641    3204 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (3.9126989s)
	I0629 20:29:38.766641    3204 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 20:29:38.781634    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:38.782640    3204 ops.go:34] apiserver oom_adj: -16
	I0629 20:29:38.785639    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed minikube.k8s.io/name=cilium-20220629200933-2408 minikube.k8s.io/updated_at=2022_06_29T20_29_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:39.153066    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:39.892048    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:40.384795    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:40.882729    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:41.383679    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:41.886898    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:42.392049    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:42.883896    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:43.375128    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:43.880717    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:44.381101    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:44.887237    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:45.374128    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:45.886485    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:46.388838    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:47.388698    3204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:29:53.834100    3204 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (6.4453617s)
	I0629 20:29:53.834218    3204 kubeadm.go:1045] duration metric: took 15.0674822s to wait for elevateKubeSystemPrivileges.
	I0629 20:29:53.834293    3204 kubeadm.go:397] StartCluster complete in 48.3799151s
	I0629 20:29:53.834352    3204 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:29:53.834867    3204 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 20:29:53.839076    3204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:29:54.657367    3204 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20220629200933-2408" rescaled to 1
	I0629 20:29:54.657533    3204 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 20:29:54.657598    3204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 20:29:54.663128    3204 out.go:177] * Verifying Kubernetes components...
	I0629 20:29:54.657747    3204 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0629 20:29:54.658336    3204 config.go:178] Loaded profile config "cilium-20220629200933-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:29:54.669511    3204 addons.go:65] Setting storage-provisioner=true in profile "cilium-20220629200933-2408"
	I0629 20:29:54.669511    3204 addons.go:153] Setting addon storage-provisioner=true in "cilium-20220629200933-2408"
	W0629 20:29:54.669511    3204 addons.go:162] addon storage-provisioner should already be in state true
	I0629 20:29:54.670058    3204 host.go:66] Checking if "cilium-20220629200933-2408" exists ...
	I0629 20:29:54.670529    3204 addons.go:65] Setting default-storageclass=true in profile "cilium-20220629200933-2408"
	I0629 20:29:54.670674    3204 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20220629200933-2408"
	I0629 20:29:54.682560    3204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 20:29:54.691630    3204 cli_runner.go:164] Run: docker container inspect cilium-20220629200933-2408 --format={{.State.Status}}
	I0629 20:29:54.697904    3204 cli_runner.go:164] Run: docker container inspect cilium-20220629200933-2408 --format={{.State.Status}}
	I0629 20:29:55.354379    3204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0629 20:29:55.367380    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220629200933-2408
	I0629 20:29:56.286585    3204 cli_runner.go:217] Completed: docker container inspect cilium-20220629200933-2408 --format={{.State.Status}}: (1.5948384s)
	I0629 20:29:56.289686    3204 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 20:29:56.293005    3204 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 20:29:56.293054    3204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 20:29:56.302182    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408
	I0629 20:29:56.305090    3204 cli_runner.go:217] Completed: docker container inspect cilium-20220629200933-2408 --format={{.State.Status}}: (1.6071764s)
	I0629 20:29:56.341907    3204 addons.go:153] Setting addon default-storageclass=true in "cilium-20220629200933-2408"
	W0629 20:29:56.341952    3204 addons.go:162] addon default-storageclass should already be in state true
	I0629 20:29:56.341952    3204 host.go:66] Checking if "cilium-20220629200933-2408" exists ...
	I0629 20:29:56.371284    3204 cli_runner.go:164] Run: docker container inspect cilium-20220629200933-2408 --format={{.State.Status}}
	I0629 20:29:56.949822    3204 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.5953687s)
	I0629 20:29:56.949929    3204 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0629 20:29:56.995015    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220629200933-2408: (1.6275161s)
	I0629 20:29:56.999587    3204 node_ready.go:35] waiting up to 5m0s for node "cilium-20220629200933-2408" to be "Ready" ...
	I0629 20:29:57.057123    3204 node_ready.go:49] node "cilium-20220629200933-2408" has status "Ready":"True"
	I0629 20:29:57.057123    3204 node_ready.go:38] duration metric: took 57.5356ms waiting for node "cilium-20220629200933-2408" to be "Ready" ...
	I0629 20:29:57.057123    3204 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 20:29:57.157861    3204 pod_ready.go:78] waiting up to 5m0s for pod "cilium-c6rx7" in "kube-system" namespace to be "Ready" ...
	I0629 20:29:57.954320    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408: (1.652091s)
	I0629 20:29:57.955752    3204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57612 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220629200933-2408\id_rsa Username:docker}
	I0629 20:29:57.987134    3204 cli_runner.go:217] Completed: docker container inspect cilium-20220629200933-2408 --format={{.State.Status}}: (1.6157858s)
	I0629 20:29:57.987308    3204 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 20:29:57.987308    3204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 20:29:57.996959    3204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408
	I0629 20:29:58.883394    3204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 20:29:59.561307    3204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629200933-2408: (1.5642739s)
	I0629 20:29:59.561754    3204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57612 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220629200933-2408\id_rsa Username:docker}
	I0629 20:29:59.563650    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:00.285640    3204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 20:30:01.855378    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:01.968644    3204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.0852312s)
	I0629 20:30:02.872260    3204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.5866038s)
	I0629 20:30:02.895106    3204 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0629 20:30:02.901844    3204 addons.go:414] enableAddons completed in 8.2439088s
	I0629 20:30:03.950577    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:06.454411    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:08.859658    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:11.737690    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:13.860629    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:16.346261    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:18.846103    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:21.347735    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:29.475080    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:31.930159    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:34.347638    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:36.349254    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:38.425395    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:40.426789    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:42.728994    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:44.927214    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:46.928321    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:49.821935    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:51.850078    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:53.928324    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:57.045666    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:59.429983    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:01.937267    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:04.328058    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:06.339745    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:08.439201    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:10.926673    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:13.342502    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:15.832616    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:17.842942    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:20.042017    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:22.532471    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:24.925861    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:27.335089    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:29.496747    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:31.780124    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:33.825911    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:36.429527    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:38.782915    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:41.295604    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:43.791983    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:45.830137    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:48.294681    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:50.781783    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:53.284599    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:55.350570    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:57.781970    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:59.926364    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:02.525967    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:04.786400    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:06.798032    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:09.289331    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:11.777927    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:13.787104    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:15.788521    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:17.791496    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:20.326401    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:22.777739    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:30.525784    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:32.781845    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:34.844641    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:37.277108    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:39.294545    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:41.329924    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:43.781690    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:46.329359    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:48.785301    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:50.849919    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:53.334463    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:55.786707    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:57.798744    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:00.291582    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:02.786314    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:05.281581    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:07.282490    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:09.798510    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:11.864075    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:14.288891    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:16.292185    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:18.791751    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:21.279094    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:23.283617    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:25.356586    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:28.412099    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:30.785922    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:32.796484    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:35.348770    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:37.781979    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:39.787747    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:41.930122    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:44.291250    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:46.295625    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:48.773141    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:50.804403    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:53.277629    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:55.344048    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:57.353782    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:57.431286    3204 pod_ready.go:81] duration metric: took 4m0.2719694s waiting for pod "cilium-c6rx7" in "kube-system" namespace to be "Ready" ...
	E0629 20:33:57.431343    3204 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0629 20:33:57.431343    3204 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-66599cf69-snpdz" in "kube-system" namespace to be "Ready" ...
	I0629 20:33:57.466759    3204 pod_ready.go:92] pod "cilium-operator-66599cf69-snpdz" in "kube-system" namespace has status "Ready":"True"
	I0629 20:33:57.466759    3204 pod_ready.go:81] duration metric: took 35.4152ms waiting for pod "cilium-operator-66599cf69-snpdz" in "kube-system" namespace to be "Ready" ...
	I0629 20:33:57.466896    3204 pod_ready.go:78] waiting up to 5m0s for pod "coredns-6d4b75cb6d-vqnjz" in "kube-system" namespace to be "Ready" ...
	I0629 20:33:57.478265    3204 pod_ready.go:97] error getting pod "coredns-6d4b75cb6d-vqnjz" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-vqnjz" not found
	I0629 20:33:57.478265    3204 pod_ready.go:81] duration metric: took 11.3692ms waiting for pod "coredns-6d4b75cb6d-vqnjz" in "kube-system" namespace to be "Ready" ...
	E0629 20:33:57.478265    3204 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6d4b75cb6d-vqnjz" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-vqnjz" not found
	I0629 20:33:57.478383    3204 pod_ready.go:78] waiting up to 5m0s for pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace to be "Ready" ...
	I0629 20:33:59.568994    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:01.577077    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:04.072231    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:06.563415    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:08.564050    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:11.060142    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:13.073042    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:15.581456    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:18.067765    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:20.130440    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:22.570727    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:24.570797    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:26.571036    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:29.058563    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:31.081893    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:33.566266    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:35.573324    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:37.581639    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:40.068480    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:42.081764    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:44.557112    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:46.558946    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:48.561428    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:51.072733    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:53.565526    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:56.054834    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:58.061086    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:00.079833    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:02.577116    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:05.053297    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:07.071539    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:09.557200    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:11.560350    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:13.562925    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:15.569877    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:18.055946    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:20.056878    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:22.063844    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:24.067792    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:26.080196    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:28.561814    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:31.057852    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:33.069039    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:35.565024    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:38.068555    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:40.560823    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:43.054623    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:45.055853    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:47.056409    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:49.063746    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:51.066585    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:53.562565    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:56.063173    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:58.066986    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:00.561367    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:02.564182    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:05.071049    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:07.559806    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:09.577439    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:12.060220    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:14.066374    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:16.554580    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:18.569022    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:20.576361    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:23.059479    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:25.137674    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:27.564748    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:29.574448    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:32.056466    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:34.060793    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:36.063767    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:38.069473    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:40.553792    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:42.559877    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:45.059746    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:47.064662    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:49.649418    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:52.066602    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:54.563606    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:57.057163    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:59.067345    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:01.566778    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:04.067081    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:06.564011    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:08.568424    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:11.056077    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:13.066863    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:15.557209    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:17.560142    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:19.566053    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:22.055530    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:24.060385    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:26.067996    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:28.559402    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:31.062201    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:33.564465    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:36.065604    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:38.571559    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:40.627135    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:43.052460    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:45.080395    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:47.563118    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:49.566288    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:51.576066    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:54.068190    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:56.071149    3204 pod_ready.go:102] pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:57.591994    3204 pod_ready.go:81] duration metric: took 4m0.112194s waiting for pod "coredns-6d4b75cb6d-zjmxn" in "kube-system" namespace to be "Ready" ...
	E0629 20:37:57.591994    3204 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0629 20:37:57.591994    3204 pod_ready.go:38] duration metric: took 8m0.5319974s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 20:37:57.594974    3204 out.go:177] 
	W0629 20:37:57.597973    3204 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0629 20:37:57.597973    3204 out.go:239] * 
	W0629 20:37:57.599975    3204 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 20:37:57.602972    3204 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (629.93s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (645.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20220629200933-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-20220629200933-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 80 (10m45.3335011s)

                                                
                                                
-- stdout --
	* [calico-20220629200933-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node calico-20220629200933-2408 in cluster calico-20220629200933-2408
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0629 20:29:57.812606   11200 out.go:296] Setting OutFile to fd 1636 ...
	I0629 20:29:57.935263   11200 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:29:57.935356   11200 out.go:309] Setting ErrFile to fd 1916...
	I0629 20:29:57.935421   11200 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:29:57.979024   11200 out.go:303] Setting JSON to false
	I0629 20:29:57.981628   11200 start.go:115] hostinfo: {"hostname":"minikube8","uptime":27160,"bootTime":1656507437,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0629 20:29:57.981628   11200 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 20:29:57.982241   11200 out.go:177] * [calico-20220629200933-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0629 20:29:57.989749   11200 notify.go:193] Checking for updates...
	I0629 20:29:57.991929   11200 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 20:29:57.994199   11200 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0629 20:29:57.996890   11200 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 20:29:57.999032   11200 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 20:29:58.002154   11200 config.go:178] Loaded profile config "cilium-20220629200933-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:29:58.002772   11200 config.go:178] Loaded profile config "kindnet-20220629200924-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:29:58.003369   11200 config.go:178] Loaded profile config "newest-cni-20220629202523-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:29:58.003522   11200 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 20:30:02.918648   11200 docker.go:137] docker version: linux-20.10.16
	I0629 20:30:02.948069   11200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:30:07.226975   11200 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (4.2788789s)
	I0629 20:30:07.228995   11200 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:83 OomKillDisable:true NGoroutines:61 SystemTime:2022-06-29 20:30:05.1878749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:30:07.234960   11200 out.go:177] * Using the docker driver based on user configuration
	I0629 20:30:07.237340   11200 start.go:284] selected driver: docker
	I0629 20:30:07.237340   11200 start.go:808] validating driver "docker" against <nil>
	I0629 20:30:07.237460   11200 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 20:30:07.361838   11200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:30:10.881402   11200 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (3.5195418s)
	I0629 20:30:10.881402   11200 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:83 OomKillDisable:true NGoroutines:61 SystemTime:2022-06-29 20:30:09.2248378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:30:10.882138   11200 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0629 20:30:10.882908   11200 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 20:30:10.887807   11200 out.go:177] * Using Docker Desktop driver with root privileges
	I0629 20:30:10.890818   11200 cni.go:95] Creating CNI manager for "calico"
	I0629 20:30:10.890818   11200 start_flags.go:305] Found "Calico" CNI - setting NetworkPlugin=cni
	I0629 20:30:10.890818   11200 start_flags.go:310] config:
	{Name:calico-20220629200933-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:calico-20220629200933-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 20:30:10.895991   11200 out.go:177] * Starting control plane node calico-20220629200933-2408 in cluster calico-20220629200933-2408
	I0629 20:30:10.897206   11200 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 20:30:10.900224   11200 out.go:177] * Pulling base image ...
	I0629 20:30:10.904257   11200 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 20:30:10.904363   11200 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 20:30:10.904559   11200 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 20:30:10.904612   11200 cache.go:57] Caching tarball of preloaded images
	I0629 20:30:10.904848   11200 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 20:30:10.904848   11200 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 20:30:10.904848   11200 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\config.json ...
	I0629 20:30:10.904848   11200 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\config.json: {Name:mk1359f5f93f7a4a80f7ed5e59663b99a5634be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:30:12.611816   11200 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 20:30:12.611816   11200 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 20:30:12.611816   11200 cache.go:208] Successfully downloaded all kic artifacts
	I0629 20:30:12.615062   11200 start.go:352] acquiring machines lock for calico-20220629200933-2408: {Name:mkdb21b222bd19372720dcf02c35a257cfbf201b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 20:30:12.615062   11200 start.go:356] acquired machines lock for "calico-20220629200933-2408" in 0s
	I0629 20:30:12.615062   11200 start.go:91] Provisioning new machine with config: &{Name:calico-20220629200933-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:calico-20220629200933-2408 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 20:30:12.615669   11200 start.go:131] createHost starting for "" (driver="docker")
	I0629 20:30:12.618660   11200 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0629 20:30:12.619106   11200 start.go:165] libmachine.API.Create for "calico-20220629200933-2408" (driver="docker")
	I0629 20:30:12.619106   11200 client.go:168] LocalClient.Create starting
	I0629 20:30:12.619715   11200 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I0629 20:30:12.619925   11200 main.go:134] libmachine: Decoding PEM data...
	I0629 20:30:12.620084   11200 main.go:134] libmachine: Parsing certificate...
	I0629 20:30:12.620316   11200 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I0629 20:30:12.620566   11200 main.go:134] libmachine: Decoding PEM data...
	I0629 20:30:12.620566   11200 main.go:134] libmachine: Parsing certificate...
	I0629 20:30:12.651515   11200 cli_runner.go:164] Run: docker network inspect calico-20220629200933-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0629 20:30:14.110795   11200 cli_runner.go:211] docker network inspect calico-20220629200933-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0629 20:30:14.110898   11200 cli_runner.go:217] Completed: docker network inspect calico-20220629200933-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.4592172s)
	I0629 20:30:14.120962   11200 network_create.go:272] running [docker network inspect calico-20220629200933-2408] to gather additional debugging logs...
	I0629 20:30:14.120962   11200 cli_runner.go:164] Run: docker network inspect calico-20220629200933-2408
	W0629 20:30:15.624692   11200 cli_runner.go:211] docker network inspect calico-20220629200933-2408 returned with exit code 1
	I0629 20:30:15.624692   11200 cli_runner.go:217] Completed: docker network inspect calico-20220629200933-2408: (1.503721s)
	I0629 20:30:15.624692   11200 network_create.go:275] error running [docker network inspect calico-20220629200933-2408]: docker network inspect calico-20220629200933-2408: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220629200933-2408
	I0629 20:30:15.624692   11200 network_create.go:277] output of [docker network inspect calico-20220629200933-2408]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220629200933-2408
	
	** /stderr **
	I0629 20:30:15.646048   11200 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0629 20:30:17.017766   11200 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3715346s)
	I0629 20:30:17.057102   11200 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00030e4a8] misses:0}
	I0629 20:30:17.057102   11200 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:17.057102   11200 network_create.go:115] attempt to create docker network calico-20220629200933-2408 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0629 20:30:17.065117   11200 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220629200933-2408 calico-20220629200933-2408
	W0629 20:30:18.462892   11200 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220629200933-2408 calico-20220629200933-2408 returned with exit code 1
	I0629 20:30:18.462960   11200 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220629200933-2408 calico-20220629200933-2408: (1.397417s)
	W0629 20:30:18.462960   11200 network_create.go:107] failed to create docker network calico-20220629200933-2408 192.168.49.0/24, will retry: subnet is taken
	I0629 20:30:18.479060   11200 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00030e4a8] amended:false}} dirty:map[] misses:0}
	I0629 20:30:18.479060   11200 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:18.505223   11200 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00030e4a8] amended:true}} dirty:map[192.168.49.0:0xc00030e4a8 192.168.58.0:0xc000aa4270] misses:0}
	I0629 20:30:18.505223   11200 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:18.505223   11200 network_create.go:115] attempt to create docker network calico-20220629200933-2408 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0629 20:30:18.510130   11200 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220629200933-2408 calico-20220629200933-2408
	W0629 20:30:19.957368   11200 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220629200933-2408 calico-20220629200933-2408 returned with exit code 1
	I0629 20:30:19.957368   11200 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220629200933-2408 calico-20220629200933-2408: (1.4472297s)
	W0629 20:30:19.957368   11200 network_create.go:107] failed to create docker network calico-20220629200933-2408 192.168.58.0/24, will retry: subnet is taken
	I0629 20:30:19.986447   11200 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00030e4a8] amended:true}} dirty:map[192.168.49.0:0xc00030e4a8 192.168.58.0:0xc000aa4270] misses:1}
	I0629 20:30:19.986447   11200 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:20.016514   11200 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00030e4a8] amended:true}} dirty:map[192.168.49.0:0xc00030e4a8 192.168.58.0:0xc000aa4270 192.168.67.0:0xc00030e540] misses:1}
	I0629 20:30:20.016843   11200 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:20.016898   11200 network_create.go:115] attempt to create docker network calico-20220629200933-2408 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0629 20:30:20.033253   11200 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220629200933-2408 calico-20220629200933-2408
	I0629 20:30:21.656669   11200 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220629200933-2408 calico-20220629200933-2408: (1.6234056s)
	I0629 20:30:21.656669   11200 network_create.go:99] docker network calico-20220629200933-2408 192.168.67.0/24 created
	I0629 20:30:21.656669   11200 kic.go:106] calculated static IP "192.168.67.2" for the "calico-20220629200933-2408" container
	I0629 20:30:21.672843   11200 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0629 20:30:23.027779   11200 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.3548184s)
	I0629 20:30:23.035486   11200 cli_runner.go:164] Run: docker volume create calico-20220629200933-2408 --label name.minikube.sigs.k8s.io=calico-20220629200933-2408 --label created_by.minikube.sigs.k8s.io=true
	I0629 20:30:24.855030   11200 cli_runner.go:217] Completed: docker volume create calico-20220629200933-2408 --label name.minikube.sigs.k8s.io=calico-20220629200933-2408 --label created_by.minikube.sigs.k8s.io=true: (1.8194554s)
	I0629 20:30:24.855080   11200 oci.go:103] Successfully created a docker volume calico-20220629200933-2408
	I0629 20:30:24.867559   11200 cli_runner.go:164] Run: docker run --rm --name calico-20220629200933-2408-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220629200933-2408 --entrypoint /usr/bin/test -v calico-20220629200933-2408:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib
	I0629 20:30:31.850514   11200 cli_runner.go:217] Completed: docker run --rm --name calico-20220629200933-2408-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220629200933-2408 --entrypoint /usr/bin/test -v calico-20220629200933-2408:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib: (6.9828408s)
	I0629 20:30:31.850590   11200 oci.go:107] Successfully prepared a docker volume calico-20220629200933-2408
	I0629 20:30:31.850590   11200 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 20:30:31.850590   11200 kic.go:179] Starting extracting preloaded images to volume ...
	I0629 20:30:31.863489   11200 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220629200933-2408:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir
	I0629 20:30:59.506934   11200 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220629200933-2408:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir: (27.6431254s)
	I0629 20:30:59.507099   11200 kic.go:188] duration metric: took 27.656338 seconds to extract preloaded images to volume
	I0629 20:30:59.517643   11200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:31:02.006801   11200 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.4890818s)
	I0629 20:31:02.007356   11200 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:74 OomKillDisable:true NGoroutines:58 SystemTime:2022-06-29 20:31:00.7844516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:31:02.025993   11200 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0629 20:31:04.472654   11200 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.4466455s)
	I0629 20:31:04.483917   11200 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220629200933-2408 --name calico-20220629200933-2408 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220629200933-2408 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220629200933-2408 --network calico-20220629200933-2408 --ip 192.168.67.2 --volume calico-20220629200933-2408:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e
	I0629 20:31:08.306388   11200 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220629200933-2408 --name calico-20220629200933-2408 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220629200933-2408 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220629200933-2408 --network calico-20220629200933-2408 --ip 192.168.67.2 --volume calico-20220629200933-2408:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e: (3.8219335s)
	I0629 20:31:08.323885   11200 cli_runner.go:164] Run: docker container inspect calico-20220629200933-2408 --format={{.State.Running}}
	I0629 20:31:09.819616   11200 cli_runner.go:217] Completed: docker container inspect calico-20220629200933-2408 --format={{.State.Running}}: (1.4954672s)
	I0629 20:31:09.843966   11200 cli_runner.go:164] Run: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}
	I0629 20:31:11.120595   11200 cli_runner.go:217] Completed: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}: (1.2763252s)
	I0629 20:31:11.137469   11200 cli_runner.go:164] Run: docker exec calico-20220629200933-2408 stat /var/lib/dpkg/alternatives/iptables
	I0629 20:31:12.623194   11200 cli_runner.go:217] Completed: docker exec calico-20220629200933-2408 stat /var/lib/dpkg/alternatives/iptables: (1.4852834s)
	I0629 20:31:12.623248   11200 oci.go:144] the created container "calico-20220629200933-2408" has a running status.
	I0629 20:31:12.623248   11200 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa...
	I0629 20:31:12.979986   11200 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0629 20:31:14.312839   11200 cli_runner.go:164] Run: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}
	I0629 20:31:15.487747   11200 cli_runner.go:217] Completed: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}: (1.1749008s)
	I0629 20:31:15.501479   11200 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0629 20:31:15.501479   11200 kic_runner.go:114] Args: [docker exec --privileged calico-20220629200933-2408 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0629 20:31:16.876587   11200 kic_runner.go:123] Done: [docker exec --privileged calico-20220629200933-2408 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3751s)
	I0629 20:31:16.881689   11200 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa...
	I0629 20:31:17.417877   11200 cli_runner.go:164] Run: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}
	I0629 20:31:18.709731   11200 cli_runner.go:217] Completed: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}: (1.2918464s)
	I0629 20:31:18.709731   11200 machine.go:88] provisioning docker machine ...
	I0629 20:31:18.709731   11200 ubuntu.go:169] provisioning hostname "calico-20220629200933-2408"
	I0629 20:31:18.718369   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:20.026864   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.3079337s)
	I0629 20:31:20.039119   11200 main.go:134] libmachine: Using SSH client type: native
	I0629 20:31:20.039568   11200 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 57864 <nil> <nil>}
	I0629 20:31:20.040125   11200 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220629200933-2408 && echo "calico-20220629200933-2408" | sudo tee /etc/hostname
	I0629 20:31:20.350217   11200 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220629200933-2408
	
	I0629 20:31:20.358064   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:21.722535   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.3643657s)
	I0629 20:31:21.730192   11200 main.go:134] libmachine: Using SSH client type: native
	I0629 20:31:21.730878   11200 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 57864 <nil> <nil>}
	I0629 20:31:21.730878   11200 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220629200933-2408' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220629200933-2408/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220629200933-2408' | sudo tee -a /etc/hosts; 
				fi
			fi
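The SSH command above edits /etc/hosts in three steps: do nothing if a line already maps the hostname, rewrite an existing 127.0.1.1 entry if one exists, otherwise append a new entry. A small Python simulation of that shell logic (an illustration, not minikube code):

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Mirror the shell script: skip if the hostname is already mapped,
    rewrite an existing 127.0.1.1 line, else append a new entry."""
    if re.search(rf"^.*\s{re.escape(name)}$", hosts, flags=re.M):
        return hosts  # hostname already present, nothing to do
    if re.search(r"^127\.0\.1\.1\s.*$", hosts, flags=re.M):
        # replace the existing 127.0.1.1 mapping (the sed branch)
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}", hosts, flags=re.M)
    # no 127.0.1.1 line yet: append one (the tee -a branch)
    return hosts.rstrip("\n") + f"\n127.0.1.1 {name}\n"
```

Running it twice is a no-op the second time, which is why the SSH command is safe to re-run on an already-provisioned machine.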
	I0629 20:31:22.002563   11200 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 20:31:22.002631   11200 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I0629 20:31:22.002703   11200 ubuntu.go:177] setting up certificates
	I0629 20:31:22.002703   11200 provision.go:83] configureAuth start
	I0629 20:31:22.012290   11200 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220629200933-2408
	I0629 20:31:23.278640   11200 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220629200933-2408: (1.2663416s)
	I0629 20:31:23.278810   11200 provision.go:138] copyHostCerts
	I0629 20:31:23.279401   11200 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I0629 20:31:23.279438   11200 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I0629 20:31:23.280215   11200 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0629 20:31:23.281592   11200 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I0629 20:31:23.281592   11200 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I0629 20:31:23.282629   11200 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0629 20:31:23.284053   11200 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I0629 20:31:23.284053   11200 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I0629 20:31:23.284812   11200 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1679 bytes)
	I0629 20:31:23.286031   11200 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-20220629200933-2408 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220629200933-2408]
	I0629 20:31:23.964618   11200 provision.go:172] copyRemoteCerts
	I0629 20:31:23.975171   11200 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 20:31:23.975405   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:25.315331   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.3399178s)
	I0629 20:31:25.315593   11200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57864 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa Username:docker}
	I0629 20:31:25.452522   11200 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4773422s)
	I0629 20:31:25.453152   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0629 20:31:25.510977   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0629 20:31:25.577074   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 20:31:25.654231   11200 provision.go:86] duration metric: configureAuth took 3.651408s
	I0629 20:31:25.654280   11200 ubuntu.go:193] setting minikube options for container-runtime
	I0629 20:31:25.654535   11200 config.go:178] Loaded profile config "calico-20220629200933-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:31:25.666746   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:26.976239   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.3094172s)
	I0629 20:31:26.982725   11200 main.go:134] libmachine: Using SSH client type: native
	I0629 20:31:26.982996   11200 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 57864 <nil> <nil>}
	I0629 20:31:26.982996   11200 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 20:31:27.201950   11200 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 20:31:27.201950   11200 ubuntu.go:71] root file system type: overlay
	I0629 20:31:27.202669   11200 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 20:31:27.217196   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:28.447616   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.2303525s)
	I0629 20:31:28.592221   11200 main.go:134] libmachine: Using SSH client type: native
	I0629 20:31:28.592894   11200 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 57864 <nil> <nil>}
	I0629 20:31:28.592894   11200 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 20:31:28.904484   11200 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 20:31:28.912351   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:30.186027   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.2736685s)
	I0629 20:31:30.192326   11200 main.go:134] libmachine: Using SSH client type: native
	I0629 20:31:30.192952   11200 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 57864 <nil> <nil>}
	I0629 20:31:30.192952   11200 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 20:31:31.771427   11200 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-29 20:31:28.880096000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0629 20:31:31.771502   11200 machine.go:91] provisioned docker machine in 13.0616912s
	I0629 20:31:31.771558   11200 client.go:171] LocalClient.Create took 1m19.1518253s
	I0629 20:31:31.771625   11200 start.go:173] duration metric: libmachine.API.Create for "calico-20220629200933-2408" took 1m19.1519655s
	I0629 20:31:31.771682   11200 start.go:306] post-start starting for "calico-20220629200933-2408" (driver="docker")
	I0629 20:31:31.771682   11200 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 20:31:31.792492   11200 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 20:31:31.797238   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:33.031803   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.2343254s)
	I0629 20:31:33.032492   11200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57864 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa Username:docker}
	I0629 20:31:33.199431   11200 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.4069308s)
	I0629 20:31:33.217784   11200 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 20:31:33.233977   11200 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 20:31:33.233977   11200 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 20:31:33.233977   11200 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 20:31:33.233977   11200 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 20:31:33.233977   11200 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I0629 20:31:33.235756   11200 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I0629 20:31:33.236018   11200 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem -> 24082.pem in /etc/ssl/certs
	I0629 20:31:33.256793   11200 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 20:31:33.313245   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem --> /etc/ssl/certs/24082.pem (1708 bytes)
	I0629 20:31:33.387757   11200 start.go:309] post-start completed in 1.616065s
	I0629 20:31:33.400056   11200 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220629200933-2408
	I0629 20:31:34.650433   11200 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220629200933-2408: (1.2502687s)
	I0629 20:31:34.650750   11200 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\config.json ...
	I0629 20:31:34.669785   11200 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 20:31:34.685426   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:36.047065   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.361631s)
	I0629 20:31:36.047689   11200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57864 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa Username:docker}
	I0629 20:31:36.215741   11200 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.545947s)
	I0629 20:31:36.240921   11200 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 20:31:36.272043   11200 start.go:134] duration metric: createHost completed in 1m23.6558595s
	I0629 20:31:36.272133   11200 start.go:81] releasing machines lock for "calico-20220629200933-2408", held for 1m23.6565563s
	I0629 20:31:36.283925   11200 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220629200933-2408
	I0629 20:31:37.691204   11200 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220629200933-2408: (1.4070651s)
	I0629 20:31:37.698182   11200 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 20:31:37.708827   11200 ssh_runner.go:195] Run: systemctl --version
	I0629 20:31:37.709875   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:37.719691   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:39.250474   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.5404028s)
	I0629 20:31:39.250831   11200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57864 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa Username:docker}
	I0629 20:31:39.282159   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.5623169s)
	I0629 20:31:39.282159   11200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57864 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa Username:docker}
	I0629 20:31:39.553345   11200 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.8551515s)
	I0629 20:31:39.553900   11200 ssh_runner.go:235] Completed: systemctl --version: (1.8445066s)
	I0629 20:31:39.567886   11200 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0629 20:31:39.595338   11200 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0629 20:31:39.653059   11200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 20:31:39.868760   11200 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0629 20:31:40.191191   11200 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 20:31:40.229329   11200 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 20:31:40.242386   11200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 20:31:40.283970   11200 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 20:31:40.342326   11200 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 20:31:40.540112   11200 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 20:31:40.731245   11200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 20:31:40.914183   11200 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 20:31:41.680594   11200 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 20:31:41.917833   11200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 20:31:42.084059   11200 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 20:31:42.125420   11200 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 20:31:42.139032   11200 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 20:31:42.157735   11200 start.go:468] Will wait 60s for crictl version
	I0629 20:31:42.169249   11200 ssh_runner.go:195] Run: sudo crictl version
	I0629 20:31:42.263175   11200 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 20:31:42.274164   11200 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 20:31:42.393008   11200 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 20:31:42.481186   11200 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 20:31:42.488111   11200 cli_runner.go:164] Run: docker exec -t calico-20220629200933-2408 dig +short host.docker.internal
	I0629 20:31:44.028173   11200 cli_runner.go:217] Completed: docker exec -t calico-20220629200933-2408 dig +short host.docker.internal: (1.5396606s)
	I0629 20:31:44.028255   11200 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 20:31:44.037589   11200 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 20:31:44.053784   11200 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 20:31:44.086309   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:45.317233   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.2309165s)
	I0629 20:31:45.317472   11200 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 20:31:45.325245   11200 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 20:31:45.407462   11200 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0629 20:31:45.407544   11200 docker.go:533] Images already preloaded, skipping extraction
	I0629 20:31:45.417750   11200 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 20:31:45.492407   11200 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0629 20:31:45.492407   11200 cache_images.go:84] Images are preloaded, skipping loading
	I0629 20:31:45.504575   11200 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 20:31:45.846534   11200 cni.go:95] Creating CNI manager for "calico"
	I0629 20:31:45.846629   11200 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 20:31:45.846666   11200 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220629200933-2408 NodeName:calico-20220629200933-2408 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 20:31:45.846942   11200 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "calico-20220629200933-2408"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 20:31:45.847202   11200 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=calico-20220629200933-2408 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:calico-20220629200933-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0629 20:31:45.862296   11200 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 20:31:45.963792   11200 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 20:31:45.981038   11200 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 20:31:46.010212   11200 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
	I0629 20:31:46.085961   11200 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 20:31:46.127563   11200 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
	I0629 20:31:46.218170   11200 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0629 20:31:46.237363   11200 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 20:31:46.293262   11200 certs.go:54] Setting up C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408 for IP: 192.168.67.2
	I0629 20:31:46.294206   11200 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key
	I0629 20:31:46.294206   11200 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key
	I0629 20:31:46.295041   11200 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\client.key
	I0629 20:31:46.295255   11200 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\client.crt with IP's: []
	I0629 20:31:46.812622   11200 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\client.crt ...
	I0629 20:31:46.812622   11200 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\client.crt: {Name:mkb7c3918b0021709a55233f2cef41da043e1386 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:31:46.814607   11200 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\client.key ...
	I0629 20:31:46.814670   11200 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\client.key: {Name:mk86bd6c8728c57412bb0bf7e39e54c107f404af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:31:46.816098   11200 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\apiserver.key.c7fa3a9e
	I0629 20:31:46.816437   11200 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0629 20:31:47.328889   11200 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\apiserver.crt.c7fa3a9e ...
	I0629 20:31:47.329040   11200 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\apiserver.crt.c7fa3a9e: {Name:mk71cf4f2237b130ae141d25b6234c68bbded351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:31:47.330569   11200 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\apiserver.key.c7fa3a9e ...
	I0629 20:31:47.330644   11200 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\apiserver.key.c7fa3a9e: {Name:mkb45f5d5b185fa5366f5096e339c42417649d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:31:47.331871   11200 certs.go:320] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\apiserver.crt
	I0629 20:31:47.345061   11200 certs.go:324] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\apiserver.key
	I0629 20:31:47.346115   11200 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\proxy-client.key
	I0629 20:31:47.347442   11200 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\proxy-client.crt with IP's: []
	I0629 20:31:47.714119   11200 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\proxy-client.crt ...
	I0629 20:31:47.714119   11200 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\proxy-client.crt: {Name:mk14d48d2d87049a6793d0affad598a7b6fcf4ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:31:47.722255   11200 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\proxy-client.key ...
	I0629 20:31:47.722255   11200 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\proxy-client.key: {Name:mk689c65af631d15a815bf33a576cd2019317f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:31:47.724585   11200 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem (1338 bytes)
	W0629 20:31:47.731851   11200 certs.go:384] ignoring C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408_empty.pem, impossibly tiny 0 bytes
	I0629 20:31:47.731851   11200 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0629 20:31:47.731851   11200 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0629 20:31:47.731851   11200 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0629 20:31:47.732633   11200 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0629 20:31:47.732874   11200 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem (1708 bytes)
	I0629 20:31:47.733297   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 20:31:47.814909   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 20:31:47.910613   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 20:31:48.006627   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0629 20:31:48.080646   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 20:31:48.141414   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 20:31:48.198937   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 20:31:48.259283   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 20:31:48.332510   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem --> /usr/share/ca-certificates/24082.pem (1708 bytes)
	I0629 20:31:48.391603   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 20:31:48.456812   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\2408.pem --> /usr/share/ca-certificates/2408.pem (1338 bytes)
	I0629 20:31:48.521514   11200 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 20:31:48.573997   11200 ssh_runner.go:195] Run: openssl version
	I0629 20:31:48.602318   11200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24082.pem && ln -fs /usr/share/ca-certificates/24082.pem /etc/ssl/certs/24082.pem"
	I0629 20:31:48.640252   11200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24082.pem
	I0629 20:31:48.660784   11200 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 18:12 /usr/share/ca-certificates/24082.pem
	I0629 20:31:48.672280   11200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24082.pem
	I0629 20:31:48.701919   11200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24082.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 20:31:48.749693   11200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 20:31:48.804818   11200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 20:31:48.818709   11200 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I0629 20:31:48.832086   11200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 20:31:48.878055   11200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 20:31:48.914983   11200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2408.pem && ln -fs /usr/share/ca-certificates/2408.pem /etc/ssl/certs/2408.pem"
	I0629 20:31:48.957097   11200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2408.pem
	I0629 20:31:48.974256   11200 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 18:12 /usr/share/ca-certificates/2408.pem
	I0629 20:31:48.986521   11200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2408.pem
	I0629 20:31:49.013548   11200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2408.pem /etc/ssl/certs/51391683.0"
	I0629 20:31:49.046149   11200 kubeadm.go:395] StartCluster: {Name:calico-20220629200933-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:calico-20220629200933-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 20:31:49.057067   11200 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 20:31:49.183140   11200 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 20:31:49.237390   11200 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 20:31:49.272451   11200 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 20:31:49.284896   11200 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 20:31:49.317609   11200 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 20:31:49.317609   11200 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 20:32:20.252332   11200 out.go:204]   - Generating certificates and keys ...
	I0629 20:32:20.258731   11200 out.go:204]   - Booting up control plane ...
	I0629 20:32:20.265748   11200 out.go:204]   - Configuring RBAC rules ...
	I0629 20:32:20.269879   11200 cni.go:95] Creating CNI manager for "calico"
	I0629 20:32:20.274133   11200 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0629 20:32:20.278036   11200 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0629 20:32:20.278036   11200 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202050 bytes)
	I0629 20:32:20.584374   11200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0629 20:32:37.330059   11200 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (16.7455837s)
	I0629 20:32:37.330309   11200 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 20:32:37.359760   11200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed minikube.k8s.io/name=calico-20220629200933-2408 minikube.k8s.io/updated_at=2022_06_29T20_32_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:32:37.359760   11200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:32:37.534235   11200 ops.go:34] apiserver oom_adj: -16
	I0629 20:32:38.271206   11200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 20:32:38.530161   11200 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed minikube.k8s.io/name=calico-20220629200933-2408 minikube.k8s.io/updated_at=2022_06_29T20_32_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.1703097s)
	I0629 20:32:38.940650   11200 kubeadm.go:1045] duration metric: took 1.6102183s to wait for elevateKubeSystemPrivileges.
	I0629 20:32:38.940650   11200 kubeadm.go:397] StartCluster complete in 49.8942005s
	I0629 20:32:38.940650   11200 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:32:38.941414   11200 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 20:32:38.945205   11200 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:32:39.647161   11200 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220629200933-2408" rescaled to 1
	I0629 20:32:39.647335   11200 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 20:32:39.649782   11200 out.go:177] * Verifying Kubernetes components...
	I0629 20:32:39.647399   11200 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0629 20:32:39.647619   11200 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 20:32:39.647619   11200 config.go:178] Loaded profile config "calico-20220629200933-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:32:39.649899   11200 addons.go:65] Setting storage-provisioner=true in profile "calico-20220629200933-2408"
	I0629 20:32:39.649899   11200 addons.go:65] Setting default-storageclass=true in profile "calico-20220629200933-2408"
	I0629 20:32:39.653715   11200 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220629200933-2408"
	I0629 20:32:39.653645   11200 addons.go:153] Setting addon storage-provisioner=true in "calico-20220629200933-2408"
	W0629 20:32:39.653891   11200 addons.go:162] addon storage-provisioner should already be in state true
	I0629 20:32:39.654084   11200 host.go:66] Checking if "calico-20220629200933-2408" exists ...
	I0629 20:32:39.674109   11200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 20:32:39.681041   11200 cli_runner.go:164] Run: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}
	I0629 20:32:39.682604   11200 cli_runner.go:164] Run: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}
	I0629 20:32:40.543717   11200 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0629 20:32:40.565540   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:32:41.331800   11200 cli_runner.go:217] Completed: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}: (1.6490017s)
	I0629 20:32:41.335620   11200 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 20:32:41.337894   11200 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 20:32:41.337894   11200 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 20:32:41.360038   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:32:41.454166   11200 cli_runner.go:217] Completed: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}: (1.7730282s)
	I0629 20:32:41.530346   11200 addons.go:153] Setting addon default-storageclass=true in "calico-20220629200933-2408"
	W0629 20:32:41.530346   11200 addons.go:162] addon default-storageclass should already be in state true
	I0629 20:32:41.530904   11200 host.go:66] Checking if "calico-20220629200933-2408" exists ...
	I0629 20:32:41.561496   11200 cli_runner.go:164] Run: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}
	I0629 20:32:42.269108   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.701582s)
	I0629 20:32:42.274510   11200 node_ready.go:35] waiting up to 5m0s for node "calico-20220629200933-2408" to be "Ready" ...
	I0629 20:32:42.428500   11200 node_ready.go:49] node "calico-20220629200933-2408" has status "Ready":"True"
	I0629 20:32:42.428562   11200 node_ready.go:38] duration metric: took 154.0517ms waiting for node "calico-20220629200933-2408" to be "Ready" ...
	I0629 20:32:42.428562   11200 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 20:32:42.476581   11200 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace to be "Ready" ...
	I0629 20:32:43.050371   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.690277s)
	I0629 20:32:43.051037   11200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57864 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa Username:docker}
	I0629 20:32:43.250233   11200 cli_runner.go:217] Completed: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}: (1.6885653s)
	I0629 20:32:43.250295   11200 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 20:32:43.250295   11200 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 20:32:43.265488   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:32:43.884668   11200 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 20:32:44.727138   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:44.881191   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.6156934s)
	I0629 20:32:44.882126   11200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57864 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa Username:docker}
	I0629 20:32:45.951167   11200 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 20:32:46.730593   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:48.825864   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:51.032664   11200 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (10.4888836s)
	I0629 20:32:51.032664   11200 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0629 20:32:51.227658   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:52.029258   11200 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.0779508s)
	I0629 20:32:52.029359   11200 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.1445202s)
	I0629 20:32:52.032808   11200 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0629 20:32:52.036658   11200 addons.go:414] enableAddons completed in 12.3891838s
	I0629 20:32:53.647715   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:56.231538   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:32:58.725747   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:01.326524   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:03.646595   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:05.732486   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:08.145014   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:10.149417   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:12.158414   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:14.829298   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:17.127970   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:19.167768   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:21.647190   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:24.087245   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:26.147263   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:28.424279   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:30.730128   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:33.150929   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:35.675255   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:38.226808   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:40.231214   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:42.635684   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:44.646849   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:46.828450   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:49.146152   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:51.146857   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:53.230861   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:55.643294   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:33:57.725783   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:00.145026   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:02.145939   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:04.230428   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:06.727565   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:09.164734   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:11.726004   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:14.129371   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:16.138141   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:18.146249   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:20.147371   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:22.643808   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:25.150260   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:27.642141   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:30.149052   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:32.625051   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:34.631123   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:36.675770   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:39.135944   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:41.644952   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:44.090317   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:46.134927   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:48.147828   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:50.629877   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:52.641192   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:54.642407   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:57.128685   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:34:59.649939   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:02.142571   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:04.735490   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:07.142590   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:09.626999   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:11.649071   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:14.087802   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:16.131362   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:18.228816   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:20.574138   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:22.725523   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:25.226003   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:27.644748   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:30.085843   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:32.588203   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:34.628619   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:37.078923   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:39.080434   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:41.089020   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:43.137854   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:45.639192   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:47.642322   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:50.133266   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:52.629173   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:55.139576   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:35:57.642198   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:00.144506   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:02.575784   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:04.626983   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:06.680559   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:09.234534   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:11.644681   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:14.075373   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:16.229656   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:18.646163   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:21.141923   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:23.584773   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:25.641442   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:28.139964   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:30.226628   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:32.639926   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:35.076552   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:37.079357   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:39.085528   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:41.582458   11200 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:42.600796   11200 pod_ready.go:81] duration metric: took 4m0.1227959s waiting for pod "calico-kube-controllers-c44b4545-2rm8n" in "kube-system" namespace to be "Ready" ...
	E0629 20:36:42.600796   11200 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0629 20:36:42.600796   11200 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-fx752" in "kube-system" namespace to be "Ready" ...
	I0629 20:36:44.670888   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:46.673440   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:49.176689   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:51.743152   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:54.229331   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:56.242831   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:36:58.731018   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:00.738471   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:03.254521   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:05.675779   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:07.742245   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:10.157967   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:12.244053   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:14.659670   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:16.730760   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:18.830570   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:21.242669   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:23.669338   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:25.743155   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:28.242571   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:30.730576   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:33.245507   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:35.670165   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:37.726270   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:39.818564   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:42.230638   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:44.250205   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:46.728874   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:48.729430   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:51.237723   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:53.682154   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:56.242812   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:37:58.253978   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:00.752848   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:03.227005   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:05.662301   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:07.728777   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:10.229506   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:12.244366   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:14.730660   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:17.228256   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:19.676170   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:21.742123   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:24.172459   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:26.229652   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:28.245203   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:30.669450   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:32.731400   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:35.176002   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:37.743541   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:40.168875   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:42.178191   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:51.546510   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:53.670359   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:55.743112   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:38:58.250946   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:00.741591   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:03.249528   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:05.729542   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:07.742852   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:10.230230   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:12.241937   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:14.664757   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:16.666233   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:18.827817   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:21.229726   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:23.729415   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:26.232377   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:28.244712   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:30.730300   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:32.739307   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:35.170617   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:37.181240   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:39.665600   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:41.667952   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:43.729738   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:45.744793   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:47.745488   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:50.238848   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:52.669436   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:54.745808   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:57.168686   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:39:59.665322   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:01.668620   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:03.735528   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:05.854468   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:08.250300   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:10.669664   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:12.736581   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:14.742834   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:16.745009   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:19.163500   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:21.241838   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:23.729295   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:26.228548   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:28.246892   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:30.674953   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:32.732327   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:34.829712   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:37.240570   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:39.727655   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:41.729141   11200 pod_ready.go:102] pod "calico-node-fx752" in "kube-system" namespace has status "Ready":"False"
	I0629 20:40:42.747935   11200 pod_ready.go:81] duration metric: took 4m0.1456993s waiting for pod "calico-node-fx752" in "kube-system" namespace to be "Ready" ...
	E0629 20:40:42.747935   11200 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0629 20:40:42.748057   11200 pod_ready.go:38] duration metric: took 8m0.3166636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 20:40:42.752537   11200 out.go:177] 
	W0629 20:40:42.756212   11200 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0629 20:40:42.756212   11200 out.go:239] * 
	W0629 20:40:42.757860   11200 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 20:40:42.760143   11200 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (645.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (89.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-20220629202523-2408 --alsologtostderr -v=1
E0629 20:30:40.455697    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
E0629 20:30:46.628642    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
E0629 20:30:46.641208    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
E0629 20:30:46.656680    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
E0629 20:30:46.680107    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
E0629 20:30:46.733509    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
E0629 20:30:46.816414    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
E0629 20:30:46.985513    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
E0629 20:30:47.307667    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
E0629 20:30:47.957395    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
E0629 20:30:49.252018    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p newest-cni-20220629202523-2408 --alsologtostderr -v=1: exit status 80 (10.7659368s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-20220629202523-2408 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0629 20:30:39.257631    7076 out.go:296] Setting OutFile to fd 1624 ...
	I0629 20:30:39.357489    7076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:30:39.357489    7076 out.go:309] Setting ErrFile to fd 1788...
	I0629 20:30:39.357489    7076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:30:39.373550    7076 out.go:303] Setting JSON to false
	I0629 20:30:39.373550    7076 mustload.go:65] Loading cluster: newest-cni-20220629202523-2408
	I0629 20:30:39.374237    7076 config.go:178] Loaded profile config "newest-cni-20220629202523-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:30:39.391767    7076 cli_runner.go:164] Run: docker container inspect newest-cni-20220629202523-2408 --format={{.State.Status}}
	I0629 20:30:43.290134    7076 cli_runner.go:217] Completed: docker container inspect newest-cni-20220629202523-2408 --format={{.State.Status}}: (3.898146s)
	I0629 20:30:43.290211    7076 host.go:66] Checking if "newest-cni-20220629202523-2408" exists ...
	I0629 20:30:43.304085    7076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220629202523-2408
	I0629 20:30:44.762646    7076 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220629202523-2408: (1.4584304s)
	I0629 20:30:44.765185    7076 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/14420/minikube-v1.26.0-1656448385-14420-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.26.0-1656448385-14420/minikube-v1.26.0-1656448385-14420-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.26.0-1656448385-14420-amd64.iso https://storage.googleapis.com/minikube-builds/iso/14420/minikube-v1.26.0-1656448385-14420.iso https://github.com/kubernetes/minikube/releases/download/v1.26.0-1656448385-14420/minikube-v1.26.0-1656448385-14420.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.26.0-1656448385-14420.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:C:\Users\jenkins.minikube8:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-20220629202523-2408 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0629 20:30:44.779583    7076 out.go:177] * Pausing node newest-cni-20220629202523-2408 ... 
	I0629 20:30:44.781656    7076 host.go:66] Checking if "newest-cni-20220629202523-2408" exists ...
	I0629 20:30:44.794879    7076 ssh_runner.go:195] Run: systemctl --version
	I0629 20:30:44.799666    7076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629202523-2408
	I0629 20:30:46.299655    7076 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629202523-2408: (1.499927s)
	I0629 20:30:46.299862    7076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57670 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\newest-cni-20220629202523-2408\id_rsa Username:docker}
	I0629 20:30:46.729993    7076 ssh_runner.go:235] Completed: systemctl --version: (1.9351025s)
	I0629 20:30:46.747660    7076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 20:30:46.785163    7076 pause.go:50] kubelet running: true
	I0629 20:30:46.799712    7076 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0629 20:30:47.833429    7076 ssh_runner.go:235] Completed: sudo systemctl disable --now kubelet: (1.033603s)
	I0629 20:30:47.854702    7076 ssh_runner.go:195] Run: docker ps --filter status=running --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0629 20:30:48.224383    7076 docker.go:451] Pausing containers: [f5b59333d6ea 207c2317d2fa b0056db737a2 b60fb563bb88 95e71a7ed45f cf162ac653a9 a567d6fb06e7 684b999ec120 52b391af30b5 ef8af2f0bf85 cf3f3c9cbb06 f494a82fa430 527c1aafd76a]
	I0629 20:30:48.238426    7076 ssh_runner.go:195] Run: docker pause f5b59333d6ea 207c2317d2fa b0056db737a2 b60fb563bb88 95e71a7ed45f cf162ac653a9 a567d6fb06e7 684b999ec120 52b391af30b5 ef8af2f0bf85 cf3f3c9cbb06 f494a82fa430 527c1aafd76a
	I0629 20:30:48.998436    7076 out.go:177] 
	W0629 20:30:49.001089    7076 out.go:239] X Exiting due to GUEST_PAUSE: docker: docker pause f5b59333d6ea 207c2317d2fa b0056db737a2 b60fb563bb88 95e71a7ed45f cf162ac653a9 a567d6fb06e7 684b999ec120 52b391af30b5 ef8af2f0bf85 cf3f3c9cbb06 f494a82fa430 527c1aafd76a: Process exited with status 1
	stdout:
	207c2317d2fa
	b0056db737a2
	b60fb563bb88
	95e71a7ed45f
	cf162ac653a9
	a567d6fb06e7
	684b999ec120
	52b391af30b5
	ef8af2f0bf85
	cf3f3c9cbb06
	f494a82fa430
	527c1aafd76a
	
	stderr:
	Error response from daemon: Container f5b59333d6ead5a94e90c151e93bfdd67d94f67b7e23dc6d12062c79ddee11a9 is not running
	
	W0629 20:30:49.001210    7076 out.go:239] * 
	W0629 20:30:49.695807    7076 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube_pause_af5e6777317b02357cc1bb6c73885f084c0a6c97_20.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 20:30:49.699162    7076 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-windows-amd64.exe pause -p newest-cni-20220629202523-2408 --alsologtostderr -v=1 failed: exit status 80
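The exit status 80 (GUEST_PAUSE) above comes from a race: minikube listed running containers with `docker ps`, then issued a single batched `docker pause` over all thirteen IDs; container f5b59333d6ea stopped in between, and a batched `docker pause` exits 1 if any listed container is not running, even though the other twelve were paused successfully (their IDs appear on stdout). The sketch below illustrates the batch-vs-per-container exit behavior with a hypothetical `pause_one` stand-in for `docker pause <id>` (no Docker daemon assumed); it is not minikube's actual code path.

```shell
# Stand-in for `docker pause <id>`: fails for a container that is not running.
pause_one() { [ "$1" != "stopped-id" ]; }

ids="running-a stopped-id running-b"

# Batched behavior (what the log shows): one dead container poisons the exit code.
batch_rc=0
for id in $ids; do
  pause_one "$id" || batch_rc=1
done

# Tolerant behavior: pause per container, warn and skip the ones already stopped.
tolerant_rc=0
for id in $ids; do
  pause_one "$id" || echo "skipping $id: not running" >&2
done

echo "batch_rc=$batch_rc tolerant_rc=$tolerant_rc"   # batch_rc=1 tolerant_rc=0
```

A per-container loop (or re-checking `--filter status=running` immediately before pausing) would narrow, though not fully close, this window.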
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220629202523-2408
helpers_test.go:231: (dbg) Done: docker inspect newest-cni-20220629202523-2408: (1.4502442s)
helpers_test.go:235: (dbg) docker inspect newest-cni-20220629202523-2408:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ebf2518a0e69a5effb731788d1ad1912f8d537997f2d90c7b01ff0020f186066",
	        "Created": "2022-06-29T20:26:28.4280678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 350986,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T20:29:05.8457271Z",
	            "FinishedAt": "2022-06-29T20:28:42.0111984Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/ebf2518a0e69a5effb731788d1ad1912f8d537997f2d90c7b01ff0020f186066/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ebf2518a0e69a5effb731788d1ad1912f8d537997f2d90c7b01ff0020f186066/hostname",
	        "HostsPath": "/var/lib/docker/containers/ebf2518a0e69a5effb731788d1ad1912f8d537997f2d90c7b01ff0020f186066/hosts",
	        "LogPath": "/var/lib/docker/containers/ebf2518a0e69a5effb731788d1ad1912f8d537997f2d90c7b01ff0020f186066/ebf2518a0e69a5effb731788d1ad1912f8d537997f2d90c7b01ff0020f186066-json.log",
	        "Name": "/newest-cni-20220629202523-2408",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220629202523-2408:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220629202523-2408",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5eed5b47d30c4acedea4e16ac2e8c6c69fba0be93e23d0e2314bd756b573abbc-init/diff:/var/lib/docker/overlay2/18fa2cfa420a1e80c1aefe4442db7e09e685eb6b69d30a3f812abf0fcd5b1ce8/diff:/var/lib/docker/overlay2/aba3e90b0a8f2d7eefad9e62fda91f6713fdc48732352c033f55a5f2fb9d5051/diff:/var/lib/docker/overlay2/6e33975e7a2b5eb470d2cc19f49dfd8506b5158029ca29518653c9de49149fa9/diff:/var/lib/docker/overlay2/0037ae946c15e22839a7ac209758f6fe7b71b326b867a9ce04ec676f5b8c06a6/diff:/var/lib/docker/overlay2/9c42624bebccf9152803eaad763e31ce035bdcdde0f54bfd5c88b9bb436d8327/diff:/var/lib/docker/overlay2/84521428dc63a36c9c8e902e4a72309e30edf7ca74fce9fc847a1f0322dbc53f/diff:/var/lib/docker/overlay2/7d7f88709e16b5aae440b1e298e370c888250af23e45a901effd41cf24361c60/diff:/var/lib/docker/overlay2/ec651b0921231e96280abd101a8af5a63c74f75e5393917c7c51a4779e8c18ee/diff:/var/lib/docker/overlay2/de54eba4af17491eb746d5d519e3e9d2209bb7e77a7e5e97a3fed0e5222cc91b/diff:/var/lib/docker/overlay2/41f2a6c56ef2a3c6f7de181184e718ecb06cff24d2f3067f95f7609c8428890c/diff:/var/lib/docker/overlay2/dadd972d4b0ae7e16296c1fd2116b2362dcd68c94ca80683b16746f9f9af4c04/diff:/var/lib/docker/overlay2/d07ed1db13541e2b4edaad932df907a36057e8115f039c12379e1f4bd9358fcc/diff:/var/lib/docker/overlay2/2ea8ed9010b183040dd8663549244a49bf69bebdb52d48dcdfab8bb80ae569e9/diff:/var/lib/docker/overlay2/2b3aef18028ba313056c34b21dd2fe925b2a075b71ce79d4a700666a4a1294f3/diff:/var/lib/docker/overlay2/4f2c4fada74eb6f2253a2e6e3e69366c21a0e146314e507111b068a94431e118/diff:/var/lib/docker/overlay2/eebb16c3252fcc56a8f29f6f4cc140749f09d91c7618992ef26ccd17bc7326a8/diff:/var/lib/docker/overlay2/4c9fb9630f6a81f45d6683e4b35bd45b802de81702df3682376cd5eded2c6293/diff:/var/lib/docker/overlay2/7966fc785bbb93b70572461c2b75d02d408e500cbfbe9fb28a85610069e53048/diff:/var/lib/docker/overlay2/7ccd830d8272e56eb8af3cb67fd85111a15d0bd24740b16d9820d03e8b5e613b/diff:/var/lib/docker/overlay2/98991bbd08d46d706f89f20373025f42bb1eca28599ab9d368327d28d37da3e5/diff:/var/lib/docker/overlay2/acd0db325a9cc956c00473cca2cbe9e8938e54e309f7812ce96651505f2c026c/diff:/var/lib/docker/overlay2/a952097255f1545a148e11dc183ed9d457d086b3b19e6cf5c0a84d334a7868fa/diff:/var/lib/docker/overlay2/3626d83dcea23ee4d5fc8d381d865b85ce85b9ec935c11ea07472acad97752b1/diff:/var/lib/docker/overlay2/daee0769e25b6c6df3644bb280d7cdd0552baeddf00f478d726753a2f02990c4/diff:/var/lib/docker/overlay2/5884da0a2a1c8a365253922fefbfd2861b15e6174a57089f167640bf48fab86d/diff:/var/lib/docker/overlay2/aee5fb879dea6a5dd5d33604a38bc85cef9e6fac8b91d40eae0556920c1f013e/diff:/var/lib/docker/overlay2/329fa8dc36977e4ddcb0c9d5de68a736a48555759a67cc6b901c51a7d20bc940/diff:/var/lib/docker/overlay2/a83adaf465d339d1dbbb19a8e721a3af2dde845d387565c2c23aa55a2a9b3050/diff:/var/lib/docker/overlay2/1eb7c3b1832b132fab8951130f1f3e7525eb849dfd649d730922f192509da8c7/diff:/var/lib/docker/overlay2/63900a7721a42c32f20ce20e83aa0648dc9f1f96e2c44c60b949cdf2ed635b89/diff:/var/lib/docker/overlay2/b1f2b06276b16264d5e1dd74a450a8433b30f118bdbda62a0be9806dc63962c9/diff:/var/lib/docker/overlay2/1d38b89baa7faea58d17550171f82f91e22823fa4687739f8e96012ba2d6b8bc/diff:/var/lib/docker/overlay2/6a845f21cdc42782d41ea29b6b6d28d87b17e628822711a35d5986ee5327afe9/diff:/var/lib/docker/overlay2/e64e9638ae14983a1fadb7196cfb18b26908f27c9c025d014d1b3e014fe592f2/diff:/var/lib/docker/overlay2/0e4706537848c7cd84366bdfaa32b5a3c84c900772b5ad83d1dfd507ddbfe686/diff:/var/lib/docker/overlay2/2bfed1b7470b0df7e7cb5905c1d1671735c6755b541e4f951e007994f0a090d8/diff:/var/lib/docker/overlay2/a337d8b9854e844eac3af70f23116c0353d23cc66cd2075e17b2f5c4daeb3a54/diff:/var/lib/docker/overlay2/ad5d4a9033e102f17440a355bb241c3e635b435132d5d3b83f45c913c2b142b9/diff:/var/lib/docker/overlay2/533adabc16e60531543e7123b3eb7c5db5a5d5c8b3ff5c5a58a357f9ce9b92a4/diff:/var/lib/docker/overlay2/004d9874f6692e521883f825fdecfdbc36c21b5776c23879841b4718d1b9f2ab/diff:/var/lib/docker/overlay2/aada4d8989429d7ecea61233a031b78c69ebd53862529973a4c5a3f581e5b2dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5eed5b47d30c4acedea4e16ac2e8c6c69fba0be93e23d0e2314bd756b573abbc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5eed5b47d30c4acedea4e16ac2e8c6c69fba0be93e23d0e2314bd756b573abbc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5eed5b47d30c4acedea4e16ac2e8c6c69fba0be93e23d0e2314bd756b573abbc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220629202523-2408",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220629202523-2408/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220629202523-2408",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220629202523-2408",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220629202523-2408",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c1814467cb2af6ef0c2cb5d8841df4b4aaec337be4e74f3cdbd2bdbc57eeb39c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57670"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57671"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57672"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57673"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57674"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c1814467cb2a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220629202523-2408": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ebf2518a0e69",
	                        "newest-cni-20220629202523-2408"
	                    ],
	                    "NetworkID": "e257354b1be03d8f64a2e06186e9fb8571000615e763dac00ac14f110afaf094",
	                    "EndpointID": "249769b8f3b9ab1af63129abf3499397d9cc5e0ff9e86c870167cd65dd189175",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220629202523-2408 -n newest-cni-20220629202523-2408
E0629 20:30:51.817534    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
E0629 20:30:56.949748    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220629202523-2408 -n newest-cni-20220629202523-2408: exit status 2 (8.2343328s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-20220629202523-2408 logs -n 25
E0629 20:31:07.201779    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-20220629202523-2408 logs -n 25: (21.1649026s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                            | Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | old-k8s-version-20220629201126-2408                        |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | embed-certs-20220629201242-2408                            |          |                   |         |                     |                     |
	| start   | -p newest-cni-20220629202523-2408 --memory=2200            | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:28 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.24.2               |          |                   |         |                     |                     |
	| ssh     | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | no-preload-20220629201225-2408                             |          |                   |         |                     |                     |
	| pause   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT |                     |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | old-k8s-version-20220629201126-2408                        |          |                   |         |                     |                     |
	| start   | -p auto-20220629200908-2408                                | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:28 GMT |
	|         | --memory=2048                                              |          |                   |         |                     |                     |
	|         | --alsologtostderr                                          |          |                   |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |          |                   |         |                     |                     |
	|         | --driver=docker                                            |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:26 GMT |
	|         | no-preload-20220629201225-2408                             |          |                   |         |                     |                     |
	| start   | -p kindnet-20220629200924-2408                             | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:26 GMT | 29 Jun 22 20:29 GMT |
	|         | --memory=2048                                              |          |                   |         |                     |                     |
	|         | --alsologtostderr                                          |          |                   |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |          |                   |         |                     |                     |
	|         | --cni=kindnet --driver=docker                              |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:26 GMT | 29 Jun 22 20:27 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:27 GMT | 29 Jun 22 20:27 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	| start   | -p cilium-20220629200933-2408                              | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:27 GMT |                     |
	|         | --memory=2048                                              |          |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |          |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=cilium                             |          |                   |         |                     |                     |
	|         | --driver=docker                                            |          |                   |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:28 GMT | 29 Jun 22 20:28 GMT |
	|         | newest-cni-20220629202523-2408                             |          |                   |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |                   |         |                     |                     |
	| stop    | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:28 GMT | 29 Jun 22 20:28 GMT |
	|         | newest-cni-20220629202523-2408                             |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |                   |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:28 GMT | 29 Jun 22 20:28 GMT |
	|         | newest-cni-20220629202523-2408                             |          |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |                   |         |                     |                     |
	| start   | -p newest-cni-20220629202523-2408 --memory=2200            | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:28 GMT | 29 Jun 22 20:30 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.24.2               |          |                   |         |                     |                     |
	| ssh     | -p auto-20220629200908-2408                                | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:28 GMT | 29 Jun 22 20:29 GMT |
	|         | pgrep -a kubelet                                           |          |                   |         |                     |                     |
	| ssh     | -p kindnet-20220629200924-2408                             | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:29 GMT | 29 Jun 22 20:29 GMT |
	|         | pgrep -a kubelet                                           |          |                   |         |                     |                     |
	| delete  | -p auto-20220629200908-2408                                | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:29 GMT | 29 Jun 22 20:29 GMT |
	| start   | -p calico-20220629200933-2408                              | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:29 GMT |                     |
	|         | --memory=2048                                              |          |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |          |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=calico                             |          |                   |         |                     |                     |
	|         | --driver=docker                                            |          |                   |         |                     |                     |
	| delete  | -p kindnet-20220629200924-2408                             | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:30 GMT | 29 Jun 22 20:30 GMT |
	| start   | -p false-20220629200924-2408                               | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:30 GMT |                     |
	|         | --memory=2048                                              |          |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |          |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=false                              |          |                   |         |                     |                     |
	|         | --driver=docker                                            |          |                   |         |                     |                     |
	| ssh     | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:30 GMT | 29 Jun 22 20:30 GMT |
	|         | newest-cni-20220629202523-2408                             |          |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |                   |         |                     |                     |
	| pause   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:30 GMT |                     |
	|         | newest-cni-20220629202523-2408                             |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	|---------|------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 20:30:28
	Running on machine: minikube8
	Binary: Built with gc go1.18.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 20:30:28.756094   10908 out.go:296] Setting OutFile to fd 1644 ...
	I0629 20:30:28.812372   10908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:30:28.812372   10908 out.go:309] Setting ErrFile to fd 1676...
	I0629 20:30:28.812372   10908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:30:28.843293   10908 out.go:303] Setting JSON to false
	I0629 20:30:28.846260   10908 start.go:115] hostinfo: {"hostname":"minikube8","uptime":27191,"bootTime":1656507437,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0629 20:30:28.846432   10908 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 20:30:28.851790   10908 out.go:177] * [false-20220629200924-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0629 20:30:28.856005   10908 notify.go:193] Checking for updates...
	I0629 20:30:28.869788   10908 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 20:30:28.877765   10908 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0629 20:30:28.889217   10908 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 20:30:28.895141   10908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 20:30:31.850514   11200 cli_runner.go:217] Completed: docker run --rm --name calico-20220629200933-2408-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220629200933-2408 --entrypoint /usr/bin/test -v calico-20220629200933-2408:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib: (6.9828408s)
	I0629 20:30:31.850590   11200 oci.go:107] Successfully prepared a docker volume calico-20220629200933-2408
	I0629 20:30:31.850590   11200 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 20:30:31.850590   11200 kic.go:179] Starting extracting preloaded images to volume ...
	I0629 20:30:31.863489   11200 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220629200933-2408:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir
	I0629 20:30:28.900508   10908 config.go:178] Loaded profile config "calico-20220629200933-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:30:28.901036   10908 config.go:178] Loaded profile config "cilium-20220629200933-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:30:28.901211   10908 config.go:178] Loaded profile config "newest-cni-20220629202523-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:30:28.901739   10908 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 20:30:32.845171   10908 docker.go:137] docker version: linux-20.10.16
	I0629 20:30:32.862480   10908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:30:29.475080    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:31.930159    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:35.577726   10908 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.7152297s)
	I0629 20:30:35.578341   10908 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:82 OomKillDisable:true NGoroutines:77 SystemTime:2022-06-29 20:30:34.2387692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:30:35.583254   10908 out.go:177] * Using the docker driver based on user configuration
	I0629 20:30:35.586139   10908 start.go:284] selected driver: docker
	I0629 20:30:35.586139   10908 start.go:808] validating driver "docker" against <nil>
	I0629 20:30:35.586679   10908 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 20:30:35.665624   10908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:30:38.246334   10908 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.5806942s)
	I0629 20:30:38.246664   10908 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:81 OomKillDisable:true NGoroutines:70 SystemTime:2022-06-29 20:30:37.0141853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:30:38.246664   10908 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0629 20:30:38.249085   10908 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 20:30:38.252998   10908 out.go:177] * Using Docker Desktop driver with root privileges
	I0629 20:30:38.257199   10908 cni.go:95] Creating CNI manager for "false"
	I0629 20:30:38.257294   10908 start_flags.go:310] config:
	{Name:false-20220629200924-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:false-20220629200924-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 20:30:38.270809   10908 out.go:177] * Starting control plane node false-20220629200924-2408 in cluster false-20220629200924-2408
	I0629 20:30:38.309826   10908 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 20:30:38.340692   10908 out.go:177] * Pulling base image ...
	I0629 20:30:38.343731   10908 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 20:30:38.343891   10908 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 20:30:38.344092   10908 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 20:30:38.344206   10908 cache.go:57] Caching tarball of preloaded images
	I0629 20:30:38.344822   10908 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 20:30:38.345047   10908 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 20:30:38.345047   10908 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220629200924-2408\config.json ...
	I0629 20:30:38.345745   10908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220629200924-2408\config.json: {Name:mkff02bed303d85f7f66d86bc6e26657facaf7f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:30:34.347638    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:36.349254    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:38.425395    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:39.763261   10908 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 20:30:39.763261   10908 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 20:30:39.763261   10908 cache.go:208] Successfully downloaded all kic artifacts
	I0629 20:30:39.763261   10908 start.go:352] acquiring machines lock for false-20220629200924-2408: {Name:mk4e7ee60eadc570bee66017265e0ca36038179d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 20:30:39.763261   10908 start.go:356] acquired machines lock for "false-20220629200924-2408" in 0s
	I0629 20:30:39.764046   10908 start.go:91] Provisioning new machine with config: &{Name:false-20220629200924-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:false-20220629200924-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 20:30:39.764046   10908 start.go:131] createHost starting for "" (driver="docker")
	I0629 20:30:39.775597   10908 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0629 20:30:39.775597   10908 start.go:165] libmachine.API.Create for "false-20220629200924-2408" (driver="docker")
	I0629 20:30:39.775597   10908 client.go:168] LocalClient.Create starting
	I0629 20:30:39.777098   10908 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I0629 20:30:39.777098   10908 main.go:134] libmachine: Decoding PEM data...
	I0629 20:30:39.777098   10908 main.go:134] libmachine: Parsing certificate...
	I0629 20:30:39.777098   10908 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I0629 20:30:39.777760   10908 main.go:134] libmachine: Decoding PEM data...
	I0629 20:30:39.777829   10908 main.go:134] libmachine: Parsing certificate...
	I0629 20:30:39.788878   10908 cli_runner.go:164] Run: docker network inspect false-20220629200924-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0629 20:30:41.211116   10908 cli_runner.go:211] docker network inspect false-20220629200924-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0629 20:30:41.211116   10908 cli_runner.go:217] Completed: docker network inspect false-20220629200924-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.4219556s)
	I0629 20:30:41.220685   10908 network_create.go:272] running [docker network inspect false-20220629200924-2408] to gather additional debugging logs...
	I0629 20:30:41.220749   10908 cli_runner.go:164] Run: docker network inspect false-20220629200924-2408
	W0629 20:30:42.530669   10908 cli_runner.go:211] docker network inspect false-20220629200924-2408 returned with exit code 1
	I0629 20:30:42.530924   10908 cli_runner.go:217] Completed: docker network inspect false-20220629200924-2408: (1.3099118s)
	I0629 20:30:42.530924   10908 network_create.go:275] error running [docker network inspect false-20220629200924-2408]: docker network inspect false-20220629200924-2408: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220629200924-2408
	I0629 20:30:42.531121   10908 network_create.go:277] output of [docker network inspect false-20220629200924-2408]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220629200924-2408
	
	** /stderr **
	I0629 20:30:42.545398   10908 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0629 20:30:40.426789    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:42.728994    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:43.940320   10908 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.394913s)
	I0629 20:30:43.971922   10908 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0004ee248] misses:0}
	I0629 20:30:43.971922   10908 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:43.971922   10908 network_create.go:115] attempt to create docker network false-20220629200924-2408 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0629 20:30:43.986113   10908 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408
	W0629 20:30:45.464727   10908 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408 returned with exit code 1
	I0629 20:30:45.464727   10908 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408: (1.4784873s)
	W0629 20:30:45.464727   10908 network_create.go:107] failed to create docker network false-20220629200924-2408 192.168.49.0/24, will retry: subnet is taken
	I0629 20:30:45.490978   10908 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:false}} dirty:map[] misses:0}
	I0629 20:30:45.491057   10908 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:45.516438   10908 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:true}} dirty:map[192.168.49.0:0xc0004ee248 192.168.58.0:0xc000708240] misses:0}
	I0629 20:30:45.516438   10908 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:45.516438   10908 network_create.go:115] attempt to create docker network false-20220629200924-2408 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0629 20:30:45.529832   10908 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408
	W0629 20:30:47.026619   10908 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408 returned with exit code 1
	I0629 20:30:47.026703   10908 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408: (1.4936356s)
	W0629 20:30:47.026703   10908 network_create.go:107] failed to create docker network false-20220629200924-2408 192.168.58.0/24, will retry: subnet is taken
	I0629 20:30:47.058714   10908 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:true}} dirty:map[192.168.49.0:0xc0004ee248 192.168.58.0:0xc000708240] misses:1}
	I0629 20:30:47.058864   10908 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:47.075965   10908 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:true}} dirty:map[192.168.49.0:0xc0004ee248 192.168.58.0:0xc000708240 192.168.67.0:0xc00014ec00] misses:1}
	I0629 20:30:47.078710   10908 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:47.078710   10908 network_create.go:115] attempt to create docker network false-20220629200924-2408 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0629 20:30:47.085627   10908 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408
	W0629 20:30:48.530934   10908 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408 returned with exit code 1
	I0629 20:30:48.531256   10908 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408: (1.4443639s)
	W0629 20:30:48.531327   10908 network_create.go:107] failed to create docker network false-20220629200924-2408 192.168.67.0/24, will retry: subnet is taken
	I0629 20:30:48.562921   10908 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:true}} dirty:map[192.168.49.0:0xc0004ee248 192.168.58.0:0xc000708240 192.168.67.0:0xc00014ec00] misses:2}
	I0629 20:30:48.562921   10908 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:48.580631   10908 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:true}} dirty:map[192.168.49.0:0xc0004ee248 192.168.58.0:0xc000708240 192.168.67.0:0xc00014ec00 192.168.76.0:0xc0007082e0] misses:2}
	I0629 20:30:48.580631   10908 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:48.580631   10908 network_create.go:115] attempt to create docker network false-20220629200924-2408 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0629 20:30:48.586171   10908 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408
	I0629 20:30:44.927214    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:46.928321    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	W0629 20:30:50.019582   10908 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408 returned with exit code 1
	I0629 20:30:50.019802   10908 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408: (1.4332202s)
	W0629 20:30:50.019909   10908 network_create.go:107] failed to create docker network false-20220629200924-2408 192.168.76.0/24, will retry: subnet is taken
	I0629 20:30:50.069909   10908 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:true}} dirty:map[192.168.49.0:0xc0004ee248 192.168.58.0:0xc000708240 192.168.67.0:0xc00014ec00 192.168.76.0:0xc0007082e0] misses:3}
	I0629 20:30:50.069909   10908 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:50.100340   10908 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:true}} dirty:map[192.168.49.0:0xc0004ee248 192.168.58.0:0xc000708240 192.168.67.0:0xc00014ec00 192.168.76.0:0xc0007082e0 192.168.85.0:0xc0004ee2e0] misses:3}
	I0629 20:30:50.100340   10908 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:50.100340   10908 network_create.go:115] attempt to create docker network false-20220629200924-2408 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0629 20:30:50.107951   10908 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408
	W0629 20:30:51.446516   10908 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408 returned with exit code 1
	I0629 20:30:51.446599   10908 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408: (1.3385565s)
	W0629 20:30:51.446599   10908 network_create.go:107] failed to create docker network false-20220629200924-2408 192.168.85.0/24, will retry: subnet is taken
	W0629 20:30:51.446599   10908 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create docker network false-20220629200924-2408: subnet is taken
	I0629 20:30:51.465379   10908 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0629 20:30:52.728102   10908 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.2626696s)
	I0629 20:30:52.743536   10908 cli_runner.go:164] Run: docker volume create false-20220629200924-2408 --label name.minikube.sigs.k8s.io=false-20220629200924-2408 --label created_by.minikube.sigs.k8s.io=true
	I0629 20:30:49.821935    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:51.850078    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:53.971974   10908 cli_runner.go:217] Completed: docker volume create false-20220629200924-2408 --label name.minikube.sigs.k8s.io=false-20220629200924-2408 --label created_by.minikube.sigs.k8s.io=true: (1.2282819s)
	I0629 20:30:53.972080   10908 oci.go:103] Successfully created a docker volume false-20220629200924-2408
	I0629 20:30:53.980589   10908 cli_runner.go:164] Run: docker run --rm --name false-20220629200924-2408-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220629200924-2408 --entrypoint /usr/bin/test -v false-20220629200924-2408:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib
	I0629 20:30:53.928324    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:57.045666    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 20:29:06 UTC, end at Wed 2022-06-29 20:31:06 UTC. --
	Jun 29 20:29:27 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:29:27.497245100Z" level=info msg="API listen on [::]:2376"
	Jun 29 20:29:27 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:29:27.503198100Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 29 20:29:59 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:29:59.631716700Z" level=info msg="ignoring event" container=cce9f86aef65e4f988d13c57b994cad7b072a10647880fc3857556b47e645a02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:29:59 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:29:59.950897900Z" level=info msg="ignoring event" container=76b17f247f80f62522d5084d4c1eaa471bad54ddc72cf175aa16d18d845f20d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:07 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:07.524340400Z" level=info msg="ignoring event" container=f105824bbce3889e723d801668bef5e712f2c59229bcdff27e2a92e6877d2772 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:08 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:08.330934200Z" level=info msg="ignoring event" container=5a10cb291023324edad7facdbad74bd9d737cbdb1bc18909e39a6c0c7f9878aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:16 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:16.824864700Z" level=info msg="ignoring event" container=f45f943da9d6c298a0ef2943ea330f21896d24cb7412dbe8da3b438a4f34fdb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:16 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:16.928083800Z" level=info msg="ignoring event" container=68f93675da04344478bc9a35c46469e96c83b11c324fd7b7576ef1daed0c6d1f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:20 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:20.520742400Z" level=info msg="ignoring event" container=c080efdf686acd01a1ea50216ae174bb64c63605b40e0287a256618d3ca51495 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:30 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:30.526975300Z" level=info msg="ignoring event" container=9b62ae870231558321fc566c6b10479e3deca7ddef177cc1ede614f3aad57cb5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:30 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:30.542689800Z" level=info msg="ignoring event" container=ca5ae665fa7045be385d0223f6e8e67235d04e70d35a4011dac566a258b0bab2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:30 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:30.727242400Z" level=info msg="ignoring event" container=7d333e159bd607ffa6ed7929406f00db647f0f5f7a9f43f0b2927065387e3514 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:30 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:30.750117000Z" level=info msg="ignoring event" container=e7e43dd850552c7c559946ac4e621f015304354cd5c6dda52571156ae8e1dab7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:34 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:34.820991500Z" level=info msg="ignoring event" container=40bd4e0c2ba3f19a3ca69a5201436ed6212a30ea3d1e1b3c48f01dd6c2197912 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:35 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:35.050682000Z" level=info msg="ignoring event" container=fe177266ce7dff86c0db664f4ecaee35e4d2a9146d61aa2d5d41574425756c87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:37 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:37.921152100Z" level=info msg="ignoring event" container=4425f8d84312a4e45b81abf13c4d1ac76e093e8ab424b5c56db7a1eee968c65e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:38 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:38.622365500Z" level=info msg="ignoring event" container=b6ef0c8d57ccedf445de7378fdd55254c47e72b9c570ed83082ea2aecb35494f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:40 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:40.171205200Z" level=info msg="ignoring event" container=fb99c265aa5daa267046ae779b84e8a948562b4603a470707e7a09b3102c3c6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:42 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:42.572632300Z" level=info msg="ignoring event" container=ddbed7307e1d0f14e286b61f5a3cba8e5bf30e23b6b4a2404f49b41c31b9d5de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:44 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:44.421807200Z" level=info msg="ignoring event" container=7c1e42b0c24620b3b634d8e9f6b3fee4bce28801ca6306b9624770a0a959c278 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:44 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:44.421919800Z" level=info msg="ignoring event" container=cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:47 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:47.851811700Z" level=info msg="ignoring event" container=f5b59333d6ead5a94e90c151e93bfdd67d94f67b7e23dc6d12062c79ddee11a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:49 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:49.343764700Z" level=info msg="ignoring event" container=f77b153c88c836607b0266447051f5d8a6cbb32d85aca9a9b6731365a7820fcf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:50 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:50.541929500Z" level=info msg="ignoring event" container=8b82b8657b071e7b567157122ac4b53f3023343f2b6762c9092323318028fe75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:50 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:50.702277400Z" level=info msg="ignoring event" container=f7b7c8ab5fca80d83afa730e9a32cdc54ac80543a07dfbd998e506c5d56c5ab8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	207c2317d2fa9       6e38f40d628db       36 seconds ago       Running             storage-provisioner       2                   b60fb563bb880
	c080efdf686ac       6e38f40d628db       About a minute ago   Exited              storage-provisioner       1                   b60fb563bb880
	b0056db737a28       a634548d10b03       About a minute ago   Running             kube-proxy                1                   95e71a7ed45f4
	cf162ac653a9a       aebe758cef4cd       About a minute ago   Running             etcd                      1                   ef8af2f0bf859
	a567d6fb06e72       d3377ffb7177c       About a minute ago   Running             kube-apiserver            1                   cf3f3c9cbb06d
	684b999ec120f       34cdf99b1bb3b       About a minute ago   Running             kube-controller-manager   1                   f494a82fa4306
	52b391af30b50       5d725196c1f47       About a minute ago   Running             kube-scheduler            1                   527c1aafd76a0
	f051cef64fd54       a634548d10b03       2 minutes ago        Exited              kube-proxy                0                   95d1ab23bd51e
	f627e4c91d25c       5d725196c1f47       3 minutes ago        Exited              kube-scheduler            0                   a278411db9899
	299a6eb7a0743       34cdf99b1bb3b       3 minutes ago        Exited              kube-controller-manager   0                   c4f6e50e1cf1c
	451a01d3d1ae2       d3377ffb7177c       3 minutes ago        Exited              kube-apiserver            0                   343083e4a54fc
	7290718aacee2       aebe758cef4cd       3 minutes ago        Exited              etcd                      0                   cd4a2e65b939e
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Jun29 20:01] WSL2: Performing memory compaction.
	[Jun29 20:06] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000015] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000100] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000027] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +21.104341] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000007] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.081691] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000052] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Jun29 20:07] WSL2: Performing memory compaction.
	[Jun29 20:08] WSL2: Performing memory compaction.
	[Jun29 20:09] WSL2: Performing memory compaction.
	[Jun29 20:11] WSL2: Performing memory compaction.
	[Jun29 20:12] WSL2: Performing memory compaction.
	[Jun29 20:14] WSL2: Performing memory compaction.
	[Jun29 20:15] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Jun29 20:16] WSL2: Performing memory compaction.
	[Jun29 20:25] WSL2: Performing memory compaction.
	[Jun29 20:26] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [7290718aacee] <==
	* {"level":"warn","ts":"2022-06-29T20:28:14.041Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.3014817s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-06-29T20:28:14.041Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.9638ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638329353935374035 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/newest-cni-20220629202523-2408\" mod_revision:305 > success:<request_put:<key:\"/registry/leases/kube-node-lease/newest-cni-20220629202523-2408\" value_size:540 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/newest-cni-20220629202523-2408\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-06-29T20:28:14.041Z","caller":"traceutil/trace.go:171","msg":"trace[51509022] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"119.584ms","start":"2022-06-29T20:28:13.921Z","end":"2022-06-29T20:28:14.041Z","steps":["trace[51509022] 'compare'  (duration: 118.5137ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T20:28:14.041Z","caller":"traceutil/trace.go:171","msg":"trace[1543502014] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:371; }","duration":"1.301803s","start":"2022-06-29T20:28:12.739Z","end":"2022-06-29T20:28:14.041Z","steps":["trace[1543502014] 'agreement among raft nodes before linearized reading'  (duration: 894.0009ms)","trace[1543502014] 'range keys from in-memory index tree'  (duration: 407.3965ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T20:28:14.042Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:28:12.739Z","time spent":"1.3023346s","remote":"127.0.0.1:33796","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2022-06-29T20:28:14.346Z","caller":"traceutil/trace.go:171","msg":"trace[1229200613] linearizableReadLoop","detail":"{readStateIndex:386; appliedIndex:386; }","duration":"294.9095ms","start":"2022-06-29T20:28:14.051Z","end":"2022-06-29T20:28:14.346Z","steps":["trace[1229200613] 'read index received'  (duration: 294.8972ms)","trace[1229200613] 'applied index is now lower than readState.Index'  (duration: 8.7µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T20:28:14.380Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"328.3239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-06-29T20:28:14.380Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"329.3465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:353"}
	{"level":"info","ts":"2022-06-29T20:28:14.380Z","caller":"traceutil/trace.go:171","msg":"trace[1580264904] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:372; }","duration":"328.5382ms","start":"2022-06-29T20:28:14.052Z","end":"2022-06-29T20:28:14.380Z","steps":["trace[1580264904] 'agreement among raft nodes before linearized reading'  (duration: 294.4315ms)","trace[1580264904] 'range keys from in-memory index tree'  (duration: 33.8672ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-29T20:28:14.380Z","caller":"traceutil/trace.go:171","msg":"trace[594798499] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:372; }","duration":"329.4019ms","start":"2022-06-29T20:28:14.051Z","end":"2022-06-29T20:28:14.380Z","steps":["trace[594798499] 'agreement among raft nodes before linearized reading'  (duration: 295.045ms)","trace[594798499] 'range keys from in-memory index tree'  (duration: 34.2581ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T20:28:14.380Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:28:14.052Z","time spent":"328.6185ms","remote":"127.0.0.1:33796","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-06-29T20:28:14.380Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:28:14.051Z","time spent":"329.4587ms","remote":"127.0.0.1:33784","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":376,"request content":"key:\"/registry/namespaces/kube-system\" "}
	{"level":"info","ts":"2022-06-29T20:28:14.537Z","caller":"traceutil/trace.go:171","msg":"trace[636619072] linearizableReadLoop","detail":"{readStateIndex:387; appliedIndex:387; }","duration":"141.4121ms","start":"2022-06-29T20:28:14.395Z","end":"2022-06-29T20:28:14.537Z","steps":["trace[636619072] 'read index received'  (duration: 141.3939ms)","trace[636619072] 'applied index is now lower than readState.Index'  (duration: 13.1µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T20:28:14.671Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"275.9666ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:28:14.671Z","caller":"traceutil/trace.go:171","msg":"trace[1653839816] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:373; }","duration":"276.2237ms","start":"2022-06-29T20:28:14.395Z","end":"2022-06-29T20:28:14.671Z","steps":["trace[1653839816] 'agreement among raft nodes before linearized reading'  (duration: 141.6783ms)","trace[1653839816] 'range keys from in-memory index tree'  (duration: 134.256ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T20:28:15.918Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"176.2265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:28:15.918Z","caller":"traceutil/trace.go:171","msg":"trace[1014052489] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:380; }","duration":"176.5514ms","start":"2022-06-29T20:28:15.742Z","end":"2022-06-29T20:28:15.918Z","steps":["trace[1014052489] 'agreement among raft nodes before linearized reading'  (duration: 78.1762ms)","trace[1014052489] 'range keys from in-memory index tree'  (duration: 97.9305ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-29T20:28:31.023Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-29T20:28:31.024Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-20220629202523-2408","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/06/29 20:28:31 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/29 20:28:31 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-29T20:28:31.146Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-06-29T20:28:31.262Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-06-29T20:28:31.317Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-06-29T20:28:31.318Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-20220629202523-2408","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [cf162ac653a9] <==
	* {"level":"warn","ts":"2022-06-29T20:30:29.020Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.0759882s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context deadline exceeded"}
	{"level":"info","ts":"2022-06-29T20:30:29.020Z","caller":"traceutil/trace.go:171","msg":"trace[2054658775] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"2.0769455s","start":"2022-06-29T20:30:26.943Z","end":"2022-06-29T20:30:29.020Z","steps":["trace[2054658775] 'agreement among raft nodes before linearized reading'  (duration: 2.0759199s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:30:29.021Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:30:26.943Z","time spent":"2.0772227s","remote":"127.0.0.1:37582","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	WARNING: 2022/06/29 20:30:29 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2022-06-29T20:30:29.447Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"946.6014ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638329353967810021 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20220629202523-2408.16fd31776cbdbfa4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20220629202523-2408.16fd31776cbdbfa4\" value_size:675 lease:6414957317113033687 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2022-06-29T20:30:29.448Z","caller":"traceutil/trace.go:171","msg":"trace[1648148282] linearizableReadLoop","detail":"{readStateIndex:648; appliedIndex:647; }","duration":"4.5126934s","start":"2022-06-29T20:30:24.935Z","end":"2022-06-29T20:30:29.448Z","steps":["trace[1648148282] 'read index received'  (duration: 3.5656078s)","trace[1648148282] 'applied index is now lower than readState.Index'  (duration: 946.9977ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-29T20:30:29.448Z","caller":"traceutil/trace.go:171","msg":"trace[1422741100] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"2.4985543s","start":"2022-06-29T20:30:26.949Z","end":"2022-06-29T20:30:29.448Z","steps":["trace[1422741100] 'process raft request'  (duration: 1.5514023s)","trace[1422741100] 'compare'  (duration: 946.0539ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T20:30:29.448Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:30:26.949Z","time spent":"2.4986272s","remote":"127.0.0.1:37546","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":784,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20220629202523-2408.16fd31776cbdbfa4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20220629202523-2408.16fd31776cbdbfa4\" value_size:675 lease:6414957317113033687 >> failure:<>"}
	{"level":"warn","ts":"2022-06-29T20:30:29.485Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.5676222s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:30:29.485Z","caller":"traceutil/trace.go:171","msg":"trace[121820227] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:607; }","duration":"1.5678116s","start":"2022-06-29T20:30:27.917Z","end":"2022-06-29T20:30:29.485Z","steps":["trace[121820227] 'agreement among raft nodes before linearized reading'  (duration: 1.5675876s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:30:29.485Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:30:27.917Z","time spent":"1.567903s","remote":"127.0.0.1:37582","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-06-29T20:30:29.485Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.8625835s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:343"}
	{"level":"warn","ts":"2022-06-29T20:30:29.485Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"558.072ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:30:29.485Z","caller":"traceutil/trace.go:171","msg":"trace[59355735] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:607; }","duration":"558.7302ms","start":"2022-06-29T20:30:28.927Z","end":"2022-06-29T20:30:29.485Z","steps":["trace[59355735] 'agreement among raft nodes before linearized reading'  (duration: 558.0743ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T20:30:29.486Z","caller":"traceutil/trace.go:171","msg":"trace[1404020785] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:607; }","duration":"1.862976s","start":"2022-06-29T20:30:27.623Z","end":"2022-06-29T20:30:29.486Z","steps":["trace[1404020785] 'agreement among raft nodes before linearized reading'  (duration: 1.8618991s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:30:29.486Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:30:27.623Z","time spent":"1.8630568s","remote":"127.0.0.1:37560","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":366,"request content":"key:\"/registry/namespaces/default\" "}
	{"level":"warn","ts":"2022-06-29T20:30:29.485Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"454.9915ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:30:29.486Z","caller":"traceutil/trace.go:171","msg":"trace[193731173] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:607; }","duration":"456.3624ms","start":"2022-06-29T20:30:29.030Z","end":"2022-06-29T20:30:29.486Z","steps":["trace[193731173] 'agreement among raft nodes before linearized reading'  (duration: 454.5962ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:30:29.486Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:30:29.030Z","time spent":"456.4366ms","remote":"127.0.0.1:37582","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-06-29T20:30:45.547Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"119.1629ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638329353967810109 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-bzssg.16fd31787edbb028\" mod_revision:629 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-bzssg.16fd31787edbb028\" value_size:656 lease:6414957317113033687 >> failure:<request_range:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-bzssg.16fd31787edbb028\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-06-29T20:30:45.551Z","caller":"traceutil/trace.go:171","msg":"trace[1321139070] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"207.0658ms","start":"2022-06-29T20:30:45.343Z","end":"2022-06-29T20:30:45.550Z","steps":["trace[1321139070] 'process raft request'  (duration: 84.7525ms)","trace[1321139070] 'compare'  (duration: 113.1244ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T20:30:47.853Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"132.4763ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:481"}
	{"level":"warn","ts":"2022-06-29T20:30:47.853Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"122.5506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1132"}
	{"level":"info","ts":"2022-06-29T20:30:47.854Z","caller":"traceutil/trace.go:171","msg":"trace[1512921429] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:642; }","duration":"132.5712ms","start":"2022-06-29T20:30:47.721Z","end":"2022-06-29T20:30:47.853Z","steps":["trace[1512921429] 'range keys from in-memory index tree'  (duration: 132.2496ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T20:30:47.854Z","caller":"traceutil/trace.go:171","msg":"trace[217357278] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:642; }","duration":"122.7726ms","start":"2022-06-29T20:30:47.731Z","end":"2022-06-29T20:30:47.854Z","steps":["trace[217357278] 'range keys from in-memory index tree'  (duration: 122.38ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  20:31:18 up  2:39,  0 users,  load average: 10.60, 8.91, 7.10
	Linux newest-cni-20220629202523-2408 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [451a01d3d1ae] <==
	* W0629 20:28:40.450960       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.468179       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.479733       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.486430       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.492784       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.519377       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.557397       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.574481       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.670187       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.724123       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.757110       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.776558       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.802936       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.813950       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.826892       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.828750       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.845695       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.929419       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.958747       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.968117       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.969610       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.981189       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:41.025905       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:41.069161       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:41.094014       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [a567d6fb06e7] <==
	* Trace[909673204]: [1.1853298s] [1.1853298s] END
	I0629 20:29:53.822720       1 trace.go:205] Trace[1286452990]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/metrics-server/token,user-agent:kubelet/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:d32bd74d-99e6-4fd9-8a9d-1c8ab5bd420b,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (29-Jun-2022 20:29:52.636) (total time: 1185ms):
	Trace[1286452990]: ---"Object stored in database" 1185ms (20:29:53.822)
	Trace[1286452990]: [1.1858741s] [1.1858741s] END
	I0629 20:29:59.425250       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0629 20:29:59.639556       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0629 20:30:00.120722       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0629 20:30:00.760513       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0629 20:30:01.121222       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0629 20:30:01.425417       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0629 20:30:14.132488       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0629 20:30:14.132489       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0629 20:30:16.437131       1 controller.go:611] quota admission added evaluator for: namespaces
	I0629 20:30:17.225297       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0629 20:30:18.779482       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.104.4.126]
	I0629 20:30:18.926963       1 controller.go:611] quota admission added evaluator for: endpoints
	I0629 20:30:19.320168       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.107.5.7]
	{"level":"warn","ts":"2022-06-29T20:30:26.934Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0027c0c40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	{"level":"warn","ts":"2022-06-29T20:30:28.944Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0027c0c40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	I0629 20:30:29.450864       1 trace.go:205] Trace[1590373146]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:eca5ed3f-b402-4fdb-a625-54c979b4d2dd,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (29-Jun-2022 20:30:26.947) (total time: 2502ms):
	Trace[1590373146]: ---"Object stored in database" 2502ms (20:30:29.450)
	Trace[1590373146]: [2.5028662s] [2.5028662s] END
	I0629 20:30:29.488164       1 trace.go:205] Trace[1217702110]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:1eb26c05-0cd0-4713-9378-9ff48218abff,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (29-Jun-2022 20:30:27.622) (total time: 1865ms):
	Trace[1217702110]: ---"About to write a response" 1865ms (20:30:29.487)
	Trace[1217702110]: [1.8658274s] [1.8658274s] END
	
	* 
	* ==> kube-controller-manager [299a6eb7a074] <==
	* I0629 20:28:03.818878       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0629 20:28:03.818877       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0629 20:28:03.819046       1 event.go:294] "Event occurred" object="newest-cni-20220629202523-2408" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220629202523-2408 event: Registered Node newest-cni-20220629202523-2408 in Controller"
	I0629 20:28:03.818405       1 shared_informer.go:262] Caches are synced for disruption
	I0629 20:28:03.819463       1 disruption.go:371] Sending events to api server.
	I0629 20:28:03.819494       1 shared_informer.go:262] Caches are synced for endpoint
	I0629 20:28:03.824403       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0629 20:28:03.824888       1 shared_informer.go:262] Caches are synced for GC
	I0629 20:28:03.826628       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 20:28:03.829101       1 shared_informer.go:262] Caches are synced for PVC protection
	I0629 20:28:04.218532       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 20:28:04.218710       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0629 20:28:04.218600       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 20:28:04.420493       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0629 20:28:04.542212       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rcjck"
	I0629 20:28:04.635181       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-jjwzr"
	I0629 20:28:04.820001       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-hnxmm"
	I0629 20:28:05.030364       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	E0629 20:28:05.126131       1 replica_set.go:550] sync "kube-system/coredns-6d4b75cb6d" failed with Operation cannot be fulfilled on replicasets.apps "coredns-6d4b75cb6d": the object has been modified; please apply your changes to the latest version and try again
	I0629 20:28:07.055087       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-jjwzr"
	I0629 20:28:08.820170       1 node_lifecycle_controller.go:1192] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0629 20:28:22.556575       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0629 20:28:22.624909       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0629 20:28:22.634758       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0629 20:28:22.661045       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-lvhfd"
	
	* 
	* ==> kube-controller-manager [684b999ec120] <==
	* E0629 20:30:14.036043       1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0629 20:30:14.123755       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0629 20:30:14.427009       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 20:30:14.439491       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 20:30:14.439530       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0629 20:30:17.259330       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0629 20:30:17.634748       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0629 20:30:17.635607       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:30:17.656100       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:30:17.656282       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:30:17.822245       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:30:17.822300       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:30:17.822412       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 20:30:17.858815       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:30:17.858834       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 20:30:17.859613       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:30:17.858854       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 20:30:17.954867       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:30:17.955115       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 20:30:17.955198       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:30:17.954976       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:30:18.042694       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-bzssg"
	I0629 20:30:18.123225       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-4gmfw"
	E0629 20:30:43.986142       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 20:30:44.632714       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [b0056db737a2] <==
	* I0629 20:29:59.249736       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0629 20:29:59.256288       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0629 20:29:59.324840       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0629 20:29:59.336506       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0629 20:29:59.341833       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0629 20:29:59.520736       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0629 20:29:59.520805       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0629 20:29:59.520851       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 20:30:00.020969       1 server_others.go:206] "Using iptables Proxier"
	I0629 20:30:00.022262       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 20:30:00.022324       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 20:30:00.022384       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 20:30:00.022581       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 20:30:00.023575       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 20:30:00.036190       1 server.go:661] "Version info" version="v1.24.2"
	I0629 20:30:00.036530       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 20:30:00.044436       1 config.go:317] "Starting service config controller"
	I0629 20:30:00.044547       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 20:30:00.044626       1 config.go:226] "Starting endpoint slice config controller"
	I0629 20:30:00.044640       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 20:30:00.045068       1 config.go:444] "Starting node config controller"
	I0629 20:30:00.045089       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 20:30:00.145376       1 shared_informer.go:262] Caches are synced for node config
	I0629 20:30:00.145656       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0629 20:30:00.145712       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [f051cef64fd5] <==
	* I0629 20:28:17.717881       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0629 20:28:17.723597       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0629 20:28:17.727762       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0629 20:28:17.733455       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0629 20:28:17.737485       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0629 20:28:17.821930       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0629 20:28:17.821985       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0629 20:28:17.822041       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 20:28:18.031340       1 server_others.go:206] "Using iptables Proxier"
	I0629 20:28:18.031425       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 20:28:18.031497       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 20:28:18.031520       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 20:28:18.031574       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 20:28:18.032166       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 20:28:18.032486       1 server.go:661] "Version info" version="v1.24.2"
	I0629 20:28:18.032667       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 20:28:18.034096       1 config.go:444] "Starting node config controller"
	I0629 20:28:18.034124       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 20:28:18.035115       1 config.go:317] "Starting service config controller"
	I0629 20:28:18.035131       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 20:28:18.035193       1 config.go:226] "Starting endpoint slice config controller"
	I0629 20:28:18.035201       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 20:28:18.134630       1 shared_informer.go:262] Caches are synced for node config
	I0629 20:28:18.136116       1 shared_informer.go:262] Caches are synced for service config
	I0629 20:28:18.136803       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [52b391af30b5] <==
	* W0629 20:29:45.023761       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0629 20:29:46.242528       1 serving.go:348] Generated self-signed cert in-memory
	W0629 20:29:52.220451       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0629 20:29:52.221286       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0629 20:29:52.221347       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0629 20:29:52.221364       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0629 20:29:52.336202       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0629 20:29:52.336305       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 20:29:52.338962       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0629 20:29:52.339092       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 20:29:52.338989       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0629 20:29:52.339011       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0629 20:29:52.439482       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [f627e4c91d25] <==
	* E0629 20:27:47.752031       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0629 20:27:47.823204       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0629 20:27:47.823797       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0629 20:27:47.825195       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0629 20:27:47.825303       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0629 20:27:47.955730       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0629 20:27:47.955872       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0629 20:27:48.022606       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0629 20:27:48.023275       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0629 20:27:48.023519       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0629 20:27:48.023554       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0629 20:27:48.153943       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 20:27:48.154162       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0629 20:27:48.221494       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0629 20:27:48.221687       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0629 20:27:48.280748       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0629 20:27:48.280874       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0629 20:27:48.321308       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0629 20:27:48.321423       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0629 20:27:48.321435       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 20:27:48.321456       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0629 20:27:50.437458       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 20:28:30.928860       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0629 20:28:30.929807       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0629 20:28:30.930706       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 20:29:06 UTC, end at Wed 2022-06-29 20:31:20 UTC. --
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         rpc error: code = Unknown desc = [failed to set up sandbox container "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" network for pod "metrics-server-5c6f97fb75-lvhfd": networkPlugin cni failed to set up pod "metrics-server-5c6f97fb75-lvhfd_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" network for pod "metrics-server-5c6f97fb75-lvhfd": networkPlugin cni failed to teardown pod "metrics-server-5c6f97fb75-lvhfd_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.29 -j CNI-e854091cec98e74c62257a9b -m comment --comment name: "crio" id: "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-e854091cec98e74c62257a9b':No such file or directory
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         Try `iptables -h' or 'iptables --help' for more information.
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         ]
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:  >
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]: E0629 20:30:45.175751    1238 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=<
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         rpc error: code = Unknown desc = [failed to set up sandbox container "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" network for pod "metrics-server-5c6f97fb75-lvhfd": networkPlugin cni failed to set up pod "metrics-server-5c6f97fb75-lvhfd_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" network for pod "metrics-server-5c6f97fb75-lvhfd": networkPlugin cni failed to teardown pod "metrics-server-5c6f97fb75-lvhfd_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.29 -j CNI-e854091cec98e74c62257a9b -m comment --comment name: "crio" id: "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-e854091cec98e74c62257a9b':No such file or directory
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         Try `iptables -h' or 'iptables --help' for more information.
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         ]
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:  > pod="kube-system/metrics-server-5c6f97fb75-lvhfd"
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]: E0629 20:30:45.175797    1238 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err=<
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         rpc error: code = Unknown desc = [failed to set up sandbox container "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" network for pod "metrics-server-5c6f97fb75-lvhfd": networkPlugin cni failed to set up pod "metrics-server-5c6f97fb75-lvhfd_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" network for pod "metrics-server-5c6f97fb75-lvhfd": networkPlugin cni failed to teardown pod "metrics-server-5c6f97fb75-lvhfd_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.29 -j CNI-e854091cec98e74c62257a9b -m comment --comment name: "crio" id: "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-e854091cec98e74c62257a9b':No such file or directory
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         Try `iptables -h' or 'iptables --help' for more information.
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         ]
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:  > pod="kube-system/metrics-server-5c6f97fb75-lvhfd"
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]: E0629 20:30:45.176144    1238 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-5c6f97fb75-lvhfd_kube-system(96cb2775-91a7-44ab-9aa2-f8059cb6bc1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-5c6f97fb75-lvhfd_kube-system(96cb2775-91a7-44ab-9aa2-f8059cb6bc1f)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1\\\" network for pod \\\"metrics-server-5c6f97fb75-lvhfd\\\": networkPlugin cni failed to set up pod \\\"metrics-server-5c6f97fb75-lvhfd_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1\\\" network for pod \\\"metrics-server-5c6f97fb75-lvhfd\\\": networkPlugin cni failed to teardown pod \\\"metrics-server-5c6f97fb75-lvhfd_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.29 -j CNI-e854091cec98e74c62257a9b -m comment --comment name: \\\"crio\\\" id: \\\"cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-e854091cec98e74c62257a9b':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-5c6f97fb75-lvhfd" podUID=96cb2775-91a7-44ab-9aa2-f8059cb6bc1f
	Jun 29 20:30:46 newest-cni-20220629202523-2408 kubelet[1238]: I0629 20:30:46.328451    1238 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1"
	Jun 29 20:30:46 newest-cni-20220629202523-2408 kubelet[1238]: E0629 20:30:46.946781    1238 kuberuntime_manager.go:1051] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: 40bd4e0c2ba3f19a3ca69a5201436ed6212a30ea3d1e1b3c48f01dd6c2197912" podSandboxID="40bd4e0c2ba3f19a3ca69a5201436ed6212a30ea3d1e1b3c48f01dd6c2197912" pod="kube-system/coredns-6d4b75cb6d-hnxmm"
	Jun 29 20:30:46 newest-cni-20220629202523-2408 kubelet[1238]: E0629 20:30:46.947012    1238 generic.go:415] "PLEG: Write status" err="rpc error: code = Unknown desc = Error: No such container: 40bd4e0c2ba3f19a3ca69a5201436ed6212a30ea3d1e1b3c48f01dd6c2197912" pod="kube-system/coredns-6d4b75cb6d-hnxmm"
	Jun 29 20:30:47 newest-cni-20220629202523-2408 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jun 29 20:30:47 newest-cni-20220629202523-2408 kubelet[1238]: I0629 20:30:47.635839    1238 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jun 29 20:30:47 newest-cni-20220629202523-2408 systemd[1]: kubelet.service: Succeeded.
	Jun 29 20:30:47 newest-cni-20220629202523-2408 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [207c2317d2fa] <==
	* I0629 20:30:32.533681       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 20:30:32.842049       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 20:30:32.842228       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [c080efdf686a] <==
	* I0629 20:29:59.339567       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0629 20:30:20.442501       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	

-- /stdout --
** stderr ** 
	E0629 20:31:17.961361    8896 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220629202523-2408 -n newest-cni-20220629202523-2408
E0629 20:31:21.422655    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
E0629 20:31:27.694808    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220629202523-2408 -n newest-cni-20220629202523-2408: exit status 2 (8.214313s)

-- stdout --
	Paused

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "newest-cni-20220629202523-2408" apiserver is not running, skipping kubectl commands (state="Paused")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220629202523-2408
helpers_test.go:231: (dbg) Done: docker inspect newest-cni-20220629202523-2408: (1.2576402s)
helpers_test.go:235: (dbg) docker inspect newest-cni-20220629202523-2408:

-- stdout --
	[
	    {
	        "Id": "ebf2518a0e69a5effb731788d1ad1912f8d537997f2d90c7b01ff0020f186066",
	        "Created": "2022-06-29T20:26:28.4280678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 350986,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T20:29:05.8457271Z",
	            "FinishedAt": "2022-06-29T20:28:42.0111984Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/ebf2518a0e69a5effb731788d1ad1912f8d537997f2d90c7b01ff0020f186066/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ebf2518a0e69a5effb731788d1ad1912f8d537997f2d90c7b01ff0020f186066/hostname",
	        "HostsPath": "/var/lib/docker/containers/ebf2518a0e69a5effb731788d1ad1912f8d537997f2d90c7b01ff0020f186066/hosts",
	        "LogPath": "/var/lib/docker/containers/ebf2518a0e69a5effb731788d1ad1912f8d537997f2d90c7b01ff0020f186066/ebf2518a0e69a5effb731788d1ad1912f8d537997f2d90c7b01ff0020f186066-json.log",
	        "Name": "/newest-cni-20220629202523-2408",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220629202523-2408:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220629202523-2408",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5eed5b47d30c4acedea4e16ac2e8c6c69fba0be93e23d0e2314bd756b573abbc-init/diff:/var/lib/docker/overlay2/18fa2cfa420a1e80c1aefe4442db7e09e685eb6b69d30a3f812abf0fcd5b1ce8/diff:/var/lib/docker/overlay2/aba3e90b0a8f2d7eefad9e62fda91f6713fdc48732352c033f55a5f2fb9d5051/diff:/var/lib/docker/overlay2/6e33975e7a2b5eb470d2cc19f49dfd8506b5158029ca29518653c9de49149fa9/diff:/var/lib/docker/overlay2/0037ae946c15e22839a7ac209758f6fe7b71b326b867a9ce04ec676f5b8c06a6/diff:/var/lib/docker/overlay2/9c42624bebccf9152803eaad763e31ce035bdcdde0f54bfd5c88b9bb436d8327/diff:/var/lib/docker/overlay2/84521428dc63a36c9c8e902e4a72309e30edf7ca74fce9fc847a1f0322dbc53f/diff:/var/lib/docker/overlay2/7d7f88709e16b5aae440b1e298e370c888250af23e45a901effd41cf24361c60/diff:/var/lib/docker/overlay2/ec651b0921231e96280abd101a8af5a63c74f75e5393917c7c51a4779e8c18ee/diff:/var/lib/docker/overlay2/de54eba4af17491eb746d5d519e3e9d2209bb7e77a7e5e97a3fed0e5222cc91b/diff:/var/lib/docker/overlay2/41f2a6c56ef2a3c6f7de181184e718ecb06cff24d2f3067f95f7609c8428890c/diff:/var/lib/docker/overlay2/dadd972d4b0ae7e16296c1fd2116b2362dcd68c94ca80683b16746f9f9af4c04/diff:/var/lib/docker/overlay2/d07ed1db13541e2b4edaad932df907a36057e8115f039c12379e1f4bd9358fcc/diff:/var/lib/docker/overlay2/2ea8ed9010b183040dd8663549244a49bf69bebdb52d48dcdfab8bb80ae569e9/diff:/var/lib/docker/overlay2/2b3aef18028ba313056c34b21dd2fe925b2a075b71ce79d4a700666a4a1294f3/diff:/var/lib/docker/overlay2/4f2c4fada74eb6f2253a2e6e3e69366c21a0e146314e507111b068a94431e118/diff:/var/lib/docker/overlay2/eebb16c3252fcc56a8f29f6f4cc140749f09d91c7618992ef26ccd17bc7326a8/diff:/var/lib/docker/overlay2/4c9fb9630f6a81f45d6683e4b35bd45b802de81702df3682376cd5eded2c6293/diff:/var/lib/docker/overlay2/7966fc785bbb93b70572461c2b75d02d408e500cbfbe9fb28a85610069e53048/diff:/var/lib/docker/overlay2/7ccd830d8272e56eb8af3cb67fd85111a15d0bd24740b16d9820d03e8b5e613b/diff:/var/lib/docker/overlay2/98991bbd08d46d706f89f20373025f42bb1eca28599ab9d368327d28d37da3e5/diff:/var/lib/docker/overlay2/acd0db325a9cc956c00473cca2cbe9e8938e54e309f7812ce96651505f2c026c/diff:/var/lib/docker/overlay2/a952097255f1545a148e11dc183ed9d457d086b3b19e6cf5c0a84d334a7868fa/diff:/var/lib/docker/overlay2/3626d83dcea23ee4d5fc8d381d865b85ce85b9ec935c11ea07472acad97752b1/diff:/var/lib/docker/overlay2/daee0769e25b6c6df3644bb280d7cdd0552baeddf00f478d726753a2f02990c4/diff:/var/lib/docker/overlay2/5884da0a2a1c8a365253922fefbfd2861b15e6174a57089f167640bf48fab86d/diff:/var/lib/docker/overlay2/aee5fb879dea6a5dd5d33604a38bc85cef9e6fac8b91d40eae0556920c1f013e/diff:/var/lib/docker/overlay2/329fa8dc36977e4ddcb0c9d5de68a736a48555759a67cc6b901c51a7d20bc940/diff:/var/lib/docker/overlay2/a83adaf465d339d1dbbb19a8e721a3af2dde845d387565c2c23aa55a2a9b3050/diff:/var/lib/docker/overlay2/1eb7c3b1832b132fab8951130f1f3e7525eb849dfd649d730922f192509da8c7/diff:/var/lib/docker/overlay2/63900a7721a42c32f20ce20e83aa0648dc9f1f96e2c44c60b949cdf2ed635b89/diff:/var/lib/docker/overlay2/b1f2b06276b16264d5e1dd74a450a8433b30f118bdbda62a0be9806dc63962c9/diff:/var/lib/docker/overlay2/1d38b89baa7faea58d17550171f82f91e22823fa4687739f8e96012ba2d6b8bc/diff:/var/lib/docker/overlay2/6a845f21cdc42782d41ea29b6b6d28d87b17e628822711a35d5986ee5327afe9/diff:/var/lib/docker/overlay2/e64e9638ae14983a1fadb7196cfb18b26908f27c9c025d014d1b3e014fe592f2/diff:/var/lib/docker/overlay2/0e4706537848c7cd84366bdfaa32b5a3c84c900772b5ad83d1dfd507ddbfe686/diff:/var/lib/docker/overlay2/2bfed1b7470b0df7e7cb5905c1d1671735c6755b541e4f951e007994f0a090d8/diff:/var/lib/docker/overlay2/a337d8b9854e844eac3af70f23116c0353d23cc66cd2075e17b2f5c4daeb3a54/diff:/var/lib/docker/overlay2/ad5d4a9033e102f17440a355bb241c3e635b435132d5d3b83f45c913c2b142b9/diff:/var/lib/docker/overlay2/533adabc16e60531543e7123b3eb7c5db5a5d5c8b3ff5c5a58a357f9ce9b92a4/diff:/var/lib/docker/overlay2/004d9874f6692e521883f825fdecfdbc36c21b5776c23879841b4718d1b9f2ab/diff:/var/lib/docker/overlay2/aada4d8989429d7ecea61233a031b78c69ebd53862529973a4c5a3f581e5b2dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5eed5b47d30c4acedea4e16ac2e8c6c69fba0be93e23d0e2314bd756b573abbc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5eed5b47d30c4acedea4e16ac2e8c6c69fba0be93e23d0e2314bd756b573abbc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5eed5b47d30c4acedea4e16ac2e8c6c69fba0be93e23d0e2314bd756b573abbc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220629202523-2408",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220629202523-2408/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220629202523-2408",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220629202523-2408",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220629202523-2408",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c1814467cb2af6ef0c2cb5d8841df4b4aaec337be4e74f3cdbd2bdbc57eeb39c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57670"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57671"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57672"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57673"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57674"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c1814467cb2a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220629202523-2408": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ebf2518a0e69",
	                        "newest-cni-20220629202523-2408"
	                    ],
	                    "NetworkID": "e257354b1be03d8f64a2e06186e9fb8571000615e763dac00ac14f110afaf094",
	                    "EndpointID": "249769b8f3b9ab1af63129abf3499397d9cc5e0ff9e86c870167cd65dd189175",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220629202523-2408 -n newest-cni-20220629202523-2408
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220629202523-2408 -n newest-cni-20220629202523-2408: exit status 2 (8.5974185s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-20220629202523-2408 logs -n 25
E0629 20:31:54.814832    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-20220629202523-2408 logs -n 25: (20.2446897s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                            | Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | old-k8s-version-20220629201126-2408                        |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | embed-certs-20220629201242-2408                            |          |                   |         |                     |                     |
	| start   | -p newest-cni-20220629202523-2408 --memory=2200            | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:28 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.24.2               |          |                   |         |                     |                     |
	| ssh     | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | no-preload-20220629201225-2408                             |          |                   |         |                     |                     |
	| pause   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT |                     |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:25 GMT |
	|         | old-k8s-version-20220629201126-2408                        |          |                   |         |                     |                     |
	| start   | -p auto-20220629200908-2408                                | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:28 GMT |
	|         | --memory=2048                                              |          |                   |         |                     |                     |
	|         | --alsologtostderr                                          |          |                   |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |          |                   |         |                     |                     |
	|         | --driver=docker                                            |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:25 GMT | 29 Jun 22 20:26 GMT |
	|         | no-preload-20220629201225-2408                             |          |                   |         |                     |                     |
	| start   | -p kindnet-20220629200924-2408                             | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:26 GMT | 29 Jun 22 20:29 GMT |
	|         | --memory=2048                                              |          |                   |         |                     |                     |
	|         | --alsologtostderr                                          |          |                   |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |          |                   |         |                     |                     |
	|         | --cni=kindnet --driver=docker                              |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:26 GMT | 29 Jun 22 20:27 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	| delete  | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:27 GMT | 29 Jun 22 20:27 GMT |
	|         | default-k8s-different-port-20220629201430-2408             |          |                   |         |                     |                     |
	| start   | -p cilium-20220629200933-2408                              | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:27 GMT |                     |
	|         | --memory=2048                                              |          |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |          |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=cilium                             |          |                   |         |                     |                     |
	|         | --driver=docker                                            |          |                   |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:28 GMT | 29 Jun 22 20:28 GMT |
	|         | newest-cni-20220629202523-2408                             |          |                   |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |                   |         |                     |                     |
	| stop    | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:28 GMT | 29 Jun 22 20:28 GMT |
	|         | newest-cni-20220629202523-2408                             |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |                   |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:28 GMT | 29 Jun 22 20:28 GMT |
	|         | newest-cni-20220629202523-2408                             |          |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |                   |         |                     |                     |
	| start   | -p newest-cni-20220629202523-2408 --memory=2200            | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:28 GMT | 29 Jun 22 20:30 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.24.2               |          |                   |         |                     |                     |
	| ssh     | -p auto-20220629200908-2408                                | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:28 GMT | 29 Jun 22 20:29 GMT |
	|         | pgrep -a kubelet                                           |          |                   |         |                     |                     |
	| ssh     | -p kindnet-20220629200924-2408                             | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:29 GMT | 29 Jun 22 20:29 GMT |
	|         | pgrep -a kubelet                                           |          |                   |         |                     |                     |
	| delete  | -p auto-20220629200908-2408                                | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:29 GMT | 29 Jun 22 20:29 GMT |
	| start   | -p calico-20220629200933-2408                              | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:29 GMT |                     |
	|         | --memory=2048                                              |          |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |          |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=calico                             |          |                   |         |                     |                     |
	|         | --driver=docker                                            |          |                   |         |                     |                     |
	| delete  | -p kindnet-20220629200924-2408                             | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:30 GMT | 29 Jun 22 20:30 GMT |
	| start   | -p false-20220629200924-2408                               | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:30 GMT |                     |
	|         | --memory=2048                                              |          |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |          |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=false                              |          |                   |         |                     |                     |
	|         | --driver=docker                                            |          |                   |         |                     |                     |
	| ssh     | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:30 GMT | 29 Jun 22 20:30 GMT |
	|         | newest-cni-20220629202523-2408                             |          |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |                   |         |                     |                     |
	| pause   | -p                                                         | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 20:30 GMT |                     |
	|         | newest-cni-20220629202523-2408                             |          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |                   |         |                     |                     |
	|---------|------------------------------------------------------------|----------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 20:30:28
	Running on machine: minikube8
	Binary: Built with gc go1.18.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 20:30:28.756094   10908 out.go:296] Setting OutFile to fd 1644 ...
	I0629 20:30:28.812372   10908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:30:28.812372   10908 out.go:309] Setting ErrFile to fd 1676...
	I0629 20:30:28.812372   10908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 20:30:28.843293   10908 out.go:303] Setting JSON to false
	I0629 20:30:28.846260   10908 start.go:115] hostinfo: {"hostname":"minikube8","uptime":27191,"bootTime":1656507437,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0629 20:30:28.846432   10908 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 20:30:28.851790   10908 out.go:177] * [false-20220629200924-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0629 20:30:28.856005   10908 notify.go:193] Checking for updates...
	I0629 20:30:28.869788   10908 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 20:30:28.877765   10908 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0629 20:30:28.889217   10908 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 20:30:28.895141   10908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 20:30:31.850514   11200 cli_runner.go:217] Completed: docker run --rm --name calico-20220629200933-2408-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220629200933-2408 --entrypoint /usr/bin/test -v calico-20220629200933-2408:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib: (6.9828408s)
	I0629 20:30:31.850590   11200 oci.go:107] Successfully prepared a docker volume calico-20220629200933-2408
	I0629 20:30:31.850590   11200 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 20:30:31.850590   11200 kic.go:179] Starting extracting preloaded images to volume ...
	I0629 20:30:31.863489   11200 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220629200933-2408:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir
	I0629 20:30:28.900508   10908 config.go:178] Loaded profile config "calico-20220629200933-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:30:28.901036   10908 config.go:178] Loaded profile config "cilium-20220629200933-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:30:28.901211   10908 config.go:178] Loaded profile config "newest-cni-20220629202523-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:30:28.901739   10908 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 20:30:32.845171   10908 docker.go:137] docker version: linux-20.10.16
	I0629 20:30:32.862480   10908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:30:29.475080    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:31.930159    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:35.577726   10908 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.7152297s)
	I0629 20:30:35.578341   10908 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:82 OomKillDisable:true NGoroutines:77 SystemTime:2022-06-29 20:30:34.2387692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:30:35.583254   10908 out.go:177] * Using the docker driver based on user configuration
	I0629 20:30:35.586139   10908 start.go:284] selected driver: docker
	I0629 20:30:35.586139   10908 start.go:808] validating driver "docker" against <nil>
	I0629 20:30:35.586679   10908 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 20:30:35.665624   10908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:30:38.246334   10908 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.5806942s)
	I0629 20:30:38.246664   10908 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:81 OomKillDisable:true NGoroutines:70 SystemTime:2022-06-29 20:30:37.0141853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:30:38.246664   10908 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0629 20:30:38.249085   10908 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 20:30:38.252998   10908 out.go:177] * Using Docker Desktop driver with root privileges
	I0629 20:30:38.257199   10908 cni.go:95] Creating CNI manager for "false"
	I0629 20:30:38.257294   10908 start_flags.go:310] config:
	{Name:false-20220629200924-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:false-20220629200924-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 20:30:38.270809   10908 out.go:177] * Starting control plane node false-20220629200924-2408 in cluster false-20220629200924-2408
	I0629 20:30:38.309826   10908 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 20:30:38.340692   10908 out.go:177] * Pulling base image ...
	I0629 20:30:38.343731   10908 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 20:30:38.343891   10908 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 20:30:38.344092   10908 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 20:30:38.344206   10908 cache.go:57] Caching tarball of preloaded images
	I0629 20:30:38.344822   10908 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 20:30:38.345047   10908 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 20:30:38.345047   10908 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220629200924-2408\config.json ...
	I0629 20:30:38.345745   10908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220629200924-2408\config.json: {Name:mkff02bed303d85f7f66d86bc6e26657facaf7f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 20:30:34.347638    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:36.349254    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:38.425395    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:39.763261   10908 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 20:30:39.763261   10908 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 20:30:39.763261   10908 cache.go:208] Successfully downloaded all kic artifacts
	I0629 20:30:39.763261   10908 start.go:352] acquiring machines lock for false-20220629200924-2408: {Name:mk4e7ee60eadc570bee66017265e0ca36038179d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 20:30:39.763261   10908 start.go:356] acquired machines lock for "false-20220629200924-2408" in 0s
	I0629 20:30:39.764046   10908 start.go:91] Provisioning new machine with config: &{Name:false-20220629200924-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:false-20220629200924-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 20:30:39.764046   10908 start.go:131] createHost starting for "" (driver="docker")
	I0629 20:30:39.775597   10908 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0629 20:30:39.775597   10908 start.go:165] libmachine.API.Create for "false-20220629200924-2408" (driver="docker")
	I0629 20:30:39.775597   10908 client.go:168] LocalClient.Create starting
	I0629 20:30:39.777098   10908 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I0629 20:30:39.777098   10908 main.go:134] libmachine: Decoding PEM data...
	I0629 20:30:39.777098   10908 main.go:134] libmachine: Parsing certificate...
	I0629 20:30:39.777098   10908 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I0629 20:30:39.777760   10908 main.go:134] libmachine: Decoding PEM data...
	I0629 20:30:39.777829   10908 main.go:134] libmachine: Parsing certificate...
	I0629 20:30:39.788878   10908 cli_runner.go:164] Run: docker network inspect false-20220629200924-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0629 20:30:41.211116   10908 cli_runner.go:211] docker network inspect false-20220629200924-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0629 20:30:41.211116   10908 cli_runner.go:217] Completed: docker network inspect false-20220629200924-2408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.4219556s)
	I0629 20:30:41.220685   10908 network_create.go:272] running [docker network inspect false-20220629200924-2408] to gather additional debugging logs...
	I0629 20:30:41.220749   10908 cli_runner.go:164] Run: docker network inspect false-20220629200924-2408
	W0629 20:30:42.530669   10908 cli_runner.go:211] docker network inspect false-20220629200924-2408 returned with exit code 1
	I0629 20:30:42.530924   10908 cli_runner.go:217] Completed: docker network inspect false-20220629200924-2408: (1.3099118s)
	I0629 20:30:42.530924   10908 network_create.go:275] error running [docker network inspect false-20220629200924-2408]: docker network inspect false-20220629200924-2408: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220629200924-2408
	I0629 20:30:42.531121   10908 network_create.go:277] output of [docker network inspect false-20220629200924-2408]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220629200924-2408
	
	** /stderr **
	I0629 20:30:42.545398   10908 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0629 20:30:40.426789    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:42.728994    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:43.940320   10908 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.394913s)
	I0629 20:30:43.971922   10908 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0004ee248] misses:0}
	I0629 20:30:43.971922   10908 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:43.971922   10908 network_create.go:115] attempt to create docker network false-20220629200924-2408 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0629 20:30:43.986113   10908 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408
	W0629 20:30:45.464727   10908 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408 returned with exit code 1
	I0629 20:30:45.464727   10908 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408: (1.4784873s)
	W0629 20:30:45.464727   10908 network_create.go:107] failed to create docker network false-20220629200924-2408 192.168.49.0/24, will retry: subnet is taken
	I0629 20:30:45.490978   10908 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:false}} dirty:map[] misses:0}
	I0629 20:30:45.491057   10908 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:45.516438   10908 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:true}} dirty:map[192.168.49.0:0xc0004ee248 192.168.58.0:0xc000708240] misses:0}
	I0629 20:30:45.516438   10908 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:45.516438   10908 network_create.go:115] attempt to create docker network false-20220629200924-2408 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0629 20:30:45.529832   10908 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408
	W0629 20:30:47.026619   10908 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408 returned with exit code 1
	I0629 20:30:47.026703   10908 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408: (1.4936356s)
	W0629 20:30:47.026703   10908 network_create.go:107] failed to create docker network false-20220629200924-2408 192.168.58.0/24, will retry: subnet is taken
	I0629 20:30:47.058714   10908 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:true}} dirty:map[192.168.49.0:0xc0004ee248 192.168.58.0:0xc000708240] misses:1}
	I0629 20:30:47.058864   10908 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:47.075965   10908 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:true}} dirty:map[192.168.49.0:0xc0004ee248 192.168.58.0:0xc000708240 192.168.67.0:0xc00014ec00] misses:1}
	I0629 20:30:47.078710   10908 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:47.078710   10908 network_create.go:115] attempt to create docker network false-20220629200924-2408 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0629 20:30:47.085627   10908 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408
	W0629 20:30:48.530934   10908 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408 returned with exit code 1
	I0629 20:30:48.531256   10908 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408: (1.4443639s)
	W0629 20:30:48.531327   10908 network_create.go:107] failed to create docker network false-20220629200924-2408 192.168.67.0/24, will retry: subnet is taken
	I0629 20:30:48.562921   10908 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:true}} dirty:map[192.168.49.0:0xc0004ee248 192.168.58.0:0xc000708240 192.168.67.0:0xc00014ec00] misses:2}
	I0629 20:30:48.562921   10908 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:48.580631   10908 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:true}} dirty:map[192.168.49.0:0xc0004ee248 192.168.58.0:0xc000708240 192.168.67.0:0xc00014ec00 192.168.76.0:0xc0007082e0] misses:2}
	I0629 20:30:48.580631   10908 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:48.580631   10908 network_create.go:115] attempt to create docker network false-20220629200924-2408 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0629 20:30:48.586171   10908 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408
	I0629 20:30:44.927214    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:46.928321    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	W0629 20:30:50.019582   10908 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408 returned with exit code 1
	I0629 20:30:50.019802   10908 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408: (1.4332202s)
	W0629 20:30:50.019909   10908 network_create.go:107] failed to create docker network false-20220629200924-2408 192.168.76.0/24, will retry: subnet is taken
	I0629 20:30:50.069909   10908 network.go:279] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:true}} dirty:map[192.168.49.0:0xc0004ee248 192.168.58.0:0xc000708240 192.168.67.0:0xc00014ec00 192.168.76.0:0xc0007082e0] misses:3}
	I0629 20:30:50.069909   10908 network.go:238] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:50.100340   10908 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004ee248] amended:true}} dirty:map[192.168.49.0:0xc0004ee248 192.168.58.0:0xc000708240 192.168.67.0:0xc00014ec00 192.168.76.0:0xc0007082e0 192.168.85.0:0xc0004ee2e0] misses:3}
	I0629 20:30:50.100340   10908 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 20:30:50.100340   10908 network_create.go:115] attempt to create docker network false-20220629200924-2408 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0629 20:30:50.107951   10908 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408
	W0629 20:30:51.446516   10908 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408 returned with exit code 1
	I0629 20:30:51.446599   10908 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-20220629200924-2408 false-20220629200924-2408: (1.3385565s)
	W0629 20:30:51.446599   10908 network_create.go:107] failed to create docker network false-20220629200924-2408 192.168.85.0/24, will retry: subnet is taken
	W0629 20:30:51.446599   10908 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create docker network false-20220629200924-2408: subnet is taken
	I0629 20:30:51.465379   10908 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0629 20:30:52.728102   10908 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.2626696s)
	I0629 20:30:52.743536   10908 cli_runner.go:164] Run: docker volume create false-20220629200924-2408 --label name.minikube.sigs.k8s.io=false-20220629200924-2408 --label created_by.minikube.sigs.k8s.io=true
	I0629 20:30:49.821935    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:51.850078    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:53.971974   10908 cli_runner.go:217] Completed: docker volume create false-20220629200924-2408 --label name.minikube.sigs.k8s.io=false-20220629200924-2408 --label created_by.minikube.sigs.k8s.io=true: (1.2282819s)
	I0629 20:30:53.972080   10908 oci.go:103] Successfully created a docker volume false-20220629200924-2408
	I0629 20:30:53.980589   10908 cli_runner.go:164] Run: docker run --rm --name false-20220629200924-2408-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220629200924-2408 --entrypoint /usr/bin/test -v false-20220629200924-2408:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib
	I0629 20:30:53.928324    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:57.045666    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:30:59.506934   11200 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220629200933-2408:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir: (27.6431254s)
	I0629 20:30:59.507099   11200 kic.go:188] duration metric: took 27.656338 seconds to extract preloaded images to volume
	I0629 20:30:59.517643   11200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:31:02.006801   11200 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.4890818s)
	I0629 20:31:02.007356   11200 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:74 OomKillDisable:true NGoroutines:58 SystemTime:2022-06-29 20:31:00.7844516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:31:02.025993   11200 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0629 20:31:00.465881   10908 cli_runner.go:217] Completed: docker run --rm --name false-20220629200924-2408-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220629200924-2408 --entrypoint /usr/bin/test -v false-20220629200924-2408:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib: (6.4842027s)
	I0629 20:31:00.465881   10908 oci.go:107] Successfully prepared a docker volume false-20220629200924-2408
	I0629 20:31:00.465881   10908 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 20:31:00.465881   10908 kic.go:179] Starting extracting preloaded images to volume ...
	I0629 20:31:00.473253   10908 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20220629200924-2408:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir
	I0629 20:30:59.429983    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:01.937267    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:04.472654   11200 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.4466455s)
	I0629 20:31:04.483917   11200 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220629200933-2408 --name calico-20220629200933-2408 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220629200933-2408 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220629200933-2408 --network calico-20220629200933-2408 --ip 192.168.67.2 --volume calico-20220629200933-2408:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e
	I0629 20:31:04.328058    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:06.339745    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:08.439201    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:08.306388   11200 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220629200933-2408 --name calico-20220629200933-2408 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220629200933-2408 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220629200933-2408 --network calico-20220629200933-2408 --ip 192.168.67.2 --volume calico-20220629200933-2408:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e: (3.8219335s)
	I0629 20:31:08.323885   11200 cli_runner.go:164] Run: docker container inspect calico-20220629200933-2408 --format={{.State.Running}}
	I0629 20:31:09.819616   11200 cli_runner.go:217] Completed: docker container inspect calico-20220629200933-2408 --format={{.State.Running}}: (1.4954672s)
	I0629 20:31:09.843966   11200 cli_runner.go:164] Run: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}
	I0629 20:31:11.120595   11200 cli_runner.go:217] Completed: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}: (1.2763252s)
	I0629 20:31:11.137469   11200 cli_runner.go:164] Run: docker exec calico-20220629200933-2408 stat /var/lib/dpkg/alternatives/iptables
	I0629 20:31:12.623194   11200 cli_runner.go:217] Completed: docker exec calico-20220629200933-2408 stat /var/lib/dpkg/alternatives/iptables: (1.4852834s)
	I0629 20:31:12.623248   11200 oci.go:144] the created container "calico-20220629200933-2408" has a running status.
	I0629 20:31:12.623248   11200 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa...
	I0629 20:31:10.926673    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:13.342502    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:12.979986   11200 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0629 20:31:14.312839   11200 cli_runner.go:164] Run: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}
	I0629 20:31:15.487747   11200 cli_runner.go:217] Completed: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}: (1.1749008s)
	I0629 20:31:15.501479   11200 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0629 20:31:15.501479   11200 kic_runner.go:114] Args: [docker exec --privileged calico-20220629200933-2408 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0629 20:31:16.876587   11200 kic_runner.go:123] Done: [docker exec --privileged calico-20220629200933-2408 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3751s)
	I0629 20:31:16.881689   11200 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa...
	I0629 20:31:17.417877   11200 cli_runner.go:164] Run: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}
	I0629 20:31:15.832616    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:17.842942    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:18.709731   11200 cli_runner.go:217] Completed: docker container inspect calico-20220629200933-2408 --format={{.State.Status}}: (1.2918464s)
	I0629 20:31:18.709731   11200 machine.go:88] provisioning docker machine ...
	I0629 20:31:18.709731   11200 ubuntu.go:169] provisioning hostname "calico-20220629200933-2408"
	I0629 20:31:18.718369   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:20.026864   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.3079337s)
	I0629 20:31:20.039119   11200 main.go:134] libmachine: Using SSH client type: native
	I0629 20:31:20.039568   11200 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 57864 <nil> <nil>}
	I0629 20:31:20.040125   11200 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220629200933-2408 && echo "calico-20220629200933-2408" | sudo tee /etc/hostname
	I0629 20:31:20.350217   11200 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220629200933-2408
	
	I0629 20:31:20.358064   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:21.722535   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.3643657s)
	I0629 20:31:21.730192   11200 main.go:134] libmachine: Using SSH client type: native
	I0629 20:31:21.730878   11200 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 57864 <nil> <nil>}
	I0629 20:31:21.730878   11200 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220629200933-2408' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220629200933-2408/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220629200933-2408' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 20:31:22.002563   11200 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 20:31:22.002631   11200 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I0629 20:31:22.002703   11200 ubuntu.go:177] setting up certificates
	I0629 20:31:22.002703   11200 provision.go:83] configureAuth start
	I0629 20:31:22.012290   11200 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220629200933-2408
	I0629 20:31:20.042017    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:22.532471    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:23.278640   11200 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220629200933-2408: (1.2663416s)
	I0629 20:31:23.278810   11200 provision.go:138] copyHostCerts
	I0629 20:31:23.279401   11200 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I0629 20:31:23.279438   11200 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I0629 20:31:23.280215   11200 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0629 20:31:23.281592   11200 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I0629 20:31:23.281592   11200 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I0629 20:31:23.282629   11200 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0629 20:31:23.284053   11200 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I0629 20:31:23.284053   11200 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I0629 20:31:23.284812   11200 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1679 bytes)
	I0629 20:31:23.286031   11200 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-20220629200933-2408 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220629200933-2408]
	I0629 20:31:23.964618   11200 provision.go:172] copyRemoteCerts
	I0629 20:31:23.975171   11200 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 20:31:23.975405   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:25.315331   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.3399178s)
	I0629 20:31:25.315593   11200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57864 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa Username:docker}
	I0629 20:31:25.452522   11200 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4773422s)
	I0629 20:31:25.453152   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0629 20:31:25.510977   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0629 20:31:25.577074   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 20:31:25.654231   11200 provision.go:86] duration metric: configureAuth took 3.651408s
	I0629 20:31:25.654280   11200 ubuntu.go:193] setting minikube options for container-runtime
	I0629 20:31:25.654535   11200 config.go:178] Loaded profile config "calico-20220629200933-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 20:31:25.666746   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:26.976239   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.3094172s)
	I0629 20:31:26.982725   11200 main.go:134] libmachine: Using SSH client type: native
	I0629 20:31:26.982996   11200 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 57864 <nil> <nil>}
	I0629 20:31:26.982996   11200 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 20:31:27.201950   11200 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 20:31:27.201950   11200 ubuntu.go:71] root file system type: overlay
	I0629 20:31:27.202669   11200 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 20:31:27.217196   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:24.925861    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:27.335089    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:28.447616   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.2303525s)
	I0629 20:31:28.592221   11200 main.go:134] libmachine: Using SSH client type: native
	I0629 20:31:28.592894   11200 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 57864 <nil> <nil>}
	I0629 20:31:28.592894   11200 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 20:31:28.904484   11200 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 20:31:28.912351   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:30.186027   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.2736685s)
	I0629 20:31:30.192326   11200 main.go:134] libmachine: Using SSH client type: native
	I0629 20:31:30.192952   11200 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d3d20] 0x13d6b80 <nil>  [] 0s} 127.0.0.1 57864 <nil> <nil>}
	I0629 20:31:30.192952   11200 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 20:31:31.771427   11200 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-29 20:31:28.880096000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0629 20:31:31.771502   11200 machine.go:91] provisioned docker machine in 13.0616912s
	I0629 20:31:31.771558   11200 client.go:171] LocalClient.Create took 1m19.1518253s
	I0629 20:31:31.771625   11200 start.go:173] duration metric: libmachine.API.Create for "calico-20220629200933-2408" took 1m19.1519655s
	I0629 20:31:31.771682   11200 start.go:306] post-start starting for "calico-20220629200933-2408" (driver="docker")
	I0629 20:31:31.771682   11200 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 20:31:31.792492   11200 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 20:31:31.797238   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:29.696961   10908 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20220629200924-2408:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir: (29.2234506s)
	I0629 20:31:29.696961   10908 kic.go:188] duration metric: took 29.230902 seconds to extract preloaded images to volume
	I0629 20:31:29.705959   10908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 20:31:32.127940   10908 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.421854s)
	I0629 20:31:32.128307   10908 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:89 OomKillDisable:true NGoroutines:63 SystemTime:2022-06-29 20:31:31.0137702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 20:31:32.139594   10908 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0629 20:31:29.496747    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:31.780124    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:33.825911    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:33.031803   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.2343254s)
	I0629 20:31:33.032492   11200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57864 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa Username:docker}
	I0629 20:31:33.199431   11200 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.4069308s)
	I0629 20:31:33.217784   11200 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 20:31:33.233977   11200 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 20:31:33.233977   11200 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 20:31:33.233977   11200 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 20:31:33.233977   11200 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 20:31:33.233977   11200 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I0629 20:31:33.235756   11200 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I0629 20:31:33.236018   11200 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem -> 24082.pem in /etc/ssl/certs
	I0629 20:31:33.256793   11200 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 20:31:33.313245   11200 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\24082.pem --> /etc/ssl/certs/24082.pem (1708 bytes)
	I0629 20:31:33.387757   11200 start.go:309] post-start completed in 1.616065s
	I0629 20:31:33.400056   11200 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220629200933-2408
	I0629 20:31:34.650433   11200 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220629200933-2408: (1.2502687s)
	I0629 20:31:34.650750   11200 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220629200933-2408\config.json ...
	I0629 20:31:34.669785   11200 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 20:31:34.685426   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:36.047065   11200 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408: (1.361631s)
	I0629 20:31:36.047689   11200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57864 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220629200933-2408\id_rsa Username:docker}
	I0629 20:31:36.215741   11200 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.545947s)
	I0629 20:31:36.240921   11200 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 20:31:36.272043   11200 start.go:134] duration metric: createHost completed in 1m23.6558595s
	I0629 20:31:36.272133   11200 start.go:81] releasing machines lock for "calico-20220629200933-2408", held for 1m23.6565563s
	I0629 20:31:36.283925   11200 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220629200933-2408
	I0629 20:31:37.691204   11200 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220629200933-2408: (1.4070651s)
	I0629 20:31:37.698182   11200 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 20:31:37.708827   11200 ssh_runner.go:195] Run: systemctl --version
	I0629 20:31:37.709875   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:37.719691   11200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629200933-2408
	I0629 20:31:34.469313   10908 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.3294787s)
	I0629 20:31:34.478331   10908 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220629200924-2408 --name false-20220629200924-2408 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220629200924-2408 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220629200924-2408 --volume false-20220629200924-2408:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e
	I0629 20:31:37.059322   10908 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220629200924-2408 --name false-20220629200924-2408 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220629200924-2408 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220629200924-2408 --volume false-20220629200924-2408:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e: (2.5809757s)
	I0629 20:31:37.073117   10908 cli_runner.go:164] Run: docker container inspect false-20220629200924-2408 --format={{.State.Running}}
	I0629 20:31:38.534545   10908 cli_runner.go:217] Completed: docker container inspect false-20220629200924-2408 --format={{.State.Running}}: (1.461317s)
	I0629 20:31:38.549608   10908 cli_runner.go:164] Run: docker container inspect false-20220629200924-2408 --format={{.State.Status}}
	I0629 20:31:36.429527    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	I0629 20:31:38.782915    3204 pod_ready.go:102] pod "cilium-c6rx7" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 20:29:06 UTC, end at Wed 2022-06-29 20:31:46 UTC. --
	Jun 29 20:29:27 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:29:27.497245100Z" level=info msg="API listen on [::]:2376"
	Jun 29 20:29:27 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:29:27.503198100Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 29 20:29:59 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:29:59.631716700Z" level=info msg="ignoring event" container=cce9f86aef65e4f988d13c57b994cad7b072a10647880fc3857556b47e645a02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:29:59 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:29:59.950897900Z" level=info msg="ignoring event" container=76b17f247f80f62522d5084d4c1eaa471bad54ddc72cf175aa16d18d845f20d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:07 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:07.524340400Z" level=info msg="ignoring event" container=f105824bbce3889e723d801668bef5e712f2c59229bcdff27e2a92e6877d2772 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:08 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:08.330934200Z" level=info msg="ignoring event" container=5a10cb291023324edad7facdbad74bd9d737cbdb1bc18909e39a6c0c7f9878aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:16 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:16.824864700Z" level=info msg="ignoring event" container=f45f943da9d6c298a0ef2943ea330f21896d24cb7412dbe8da3b438a4f34fdb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:16 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:16.928083800Z" level=info msg="ignoring event" container=68f93675da04344478bc9a35c46469e96c83b11c324fd7b7576ef1daed0c6d1f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:20 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:20.520742400Z" level=info msg="ignoring event" container=c080efdf686acd01a1ea50216ae174bb64c63605b40e0287a256618d3ca51495 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:30 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:30.526975300Z" level=info msg="ignoring event" container=9b62ae870231558321fc566c6b10479e3deca7ddef177cc1ede614f3aad57cb5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:30 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:30.542689800Z" level=info msg="ignoring event" container=ca5ae665fa7045be385d0223f6e8e67235d04e70d35a4011dac566a258b0bab2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:30 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:30.727242400Z" level=info msg="ignoring event" container=7d333e159bd607ffa6ed7929406f00db647f0f5f7a9f43f0b2927065387e3514 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:30 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:30.750117000Z" level=info msg="ignoring event" container=e7e43dd850552c7c559946ac4e621f015304354cd5c6dda52571156ae8e1dab7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:34 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:34.820991500Z" level=info msg="ignoring event" container=40bd4e0c2ba3f19a3ca69a5201436ed6212a30ea3d1e1b3c48f01dd6c2197912 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:35 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:35.050682000Z" level=info msg="ignoring event" container=fe177266ce7dff86c0db664f4ecaee35e4d2a9146d61aa2d5d41574425756c87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:37 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:37.921152100Z" level=info msg="ignoring event" container=4425f8d84312a4e45b81abf13c4d1ac76e093e8ab424b5c56db7a1eee968c65e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:38 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:38.622365500Z" level=info msg="ignoring event" container=b6ef0c8d57ccedf445de7378fdd55254c47e72b9c570ed83082ea2aecb35494f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:40 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:40.171205200Z" level=info msg="ignoring event" container=fb99c265aa5daa267046ae779b84e8a948562b4603a470707e7a09b3102c3c6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:42 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:42.572632300Z" level=info msg="ignoring event" container=ddbed7307e1d0f14e286b61f5a3cba8e5bf30e23b6b4a2404f49b41c31b9d5de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:44 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:44.421807200Z" level=info msg="ignoring event" container=7c1e42b0c24620b3b634d8e9f6b3fee4bce28801ca6306b9624770a0a959c278 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:44 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:44.421919800Z" level=info msg="ignoring event" container=cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:47 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:47.851811700Z" level=info msg="ignoring event" container=f5b59333d6ead5a94e90c151e93bfdd67d94f67b7e23dc6d12062c79ddee11a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:49 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:49.343764700Z" level=info msg="ignoring event" container=f77b153c88c836607b0266447051f5d8a6cbb32d85aca9a9b6731365a7820fcf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:50 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:50.541929500Z" level=info msg="ignoring event" container=8b82b8657b071e7b567157122ac4b53f3023343f2b6762c9092323318028fe75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 20:30:50 newest-cni-20220629202523-2408 dockerd[646]: time="2022-06-29T20:30:50.702277400Z" level=info msg="ignoring event" container=f7b7c8ab5fca80d83afa730e9a32cdc54ac80543a07dfbd998e506c5d56c5ab8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	207c2317d2fa9       6e38f40d628db       About a minute ago   Running             storage-provisioner       2                   b60fb563bb880
	c080efdf686ac       6e38f40d628db       About a minute ago   Exited              storage-provisioner       1                   b60fb563bb880
	b0056db737a28       a634548d10b03       About a minute ago   Running             kube-proxy                1                   95e71a7ed45f4
	cf162ac653a9a       aebe758cef4cd       2 minutes ago        Running             etcd                      1                   ef8af2f0bf859
	a567d6fb06e72       d3377ffb7177c       2 minutes ago        Running             kube-apiserver            1                   cf3f3c9cbb06d
	684b999ec120f       34cdf99b1bb3b       2 minutes ago        Running             kube-controller-manager   1                   f494a82fa4306
	52b391af30b50       5d725196c1f47       2 minutes ago        Running             kube-scheduler            1                   527c1aafd76a0
	f051cef64fd54       a634548d10b03       3 minutes ago        Exited              kube-proxy                0                   95d1ab23bd51e
	f627e4c91d25c       5d725196c1f47       4 minutes ago        Exited              kube-scheduler            0                   a278411db9899
	299a6eb7a0743       34cdf99b1bb3b       4 minutes ago        Exited              kube-controller-manager   0                   c4f6e50e1cf1c
	451a01d3d1ae2       d3377ffb7177c       4 minutes ago        Exited              kube-apiserver            0                   343083e4a54fc
	7290718aacee2       aebe758cef4cd       4 minutes ago        Exited              etcd                      0                   cd4a2e65b939e
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Jun29 20:01] WSL2: Performing memory compaction.
	[Jun29 20:06] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000015] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000100] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000027] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +21.104341] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000007] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.081691] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000052] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Jun29 20:07] WSL2: Performing memory compaction.
	[Jun29 20:08] WSL2: Performing memory compaction.
	[Jun29 20:09] WSL2: Performing memory compaction.
	[Jun29 20:11] WSL2: Performing memory compaction.
	[Jun29 20:12] WSL2: Performing memory compaction.
	[Jun29 20:14] WSL2: Performing memory compaction.
	[Jun29 20:15] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Jun29 20:16] WSL2: Performing memory compaction.
	[Jun29 20:25] WSL2: Performing memory compaction.
	[Jun29 20:26] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [7290718aacee] <==
	* {"level":"warn","ts":"2022-06-29T20:28:14.041Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.3014817s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-06-29T20:28:14.041Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.9638ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638329353935374035 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/newest-cni-20220629202523-2408\" mod_revision:305 > success:<request_put:<key:\"/registry/leases/kube-node-lease/newest-cni-20220629202523-2408\" value_size:540 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/newest-cni-20220629202523-2408\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-06-29T20:28:14.041Z","caller":"traceutil/trace.go:171","msg":"trace[51509022] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"119.584ms","start":"2022-06-29T20:28:13.921Z","end":"2022-06-29T20:28:14.041Z","steps":["trace[51509022] 'compare'  (duration: 118.5137ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T20:28:14.041Z","caller":"traceutil/trace.go:171","msg":"trace[1543502014] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:371; }","duration":"1.301803s","start":"2022-06-29T20:28:12.739Z","end":"2022-06-29T20:28:14.041Z","steps":["trace[1543502014] 'agreement among raft nodes before linearized reading'  (duration: 894.0009ms)","trace[1543502014] 'range keys from in-memory index tree'  (duration: 407.3965ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T20:28:14.042Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:28:12.739Z","time spent":"1.3023346s","remote":"127.0.0.1:33796","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2022-06-29T20:28:14.346Z","caller":"traceutil/trace.go:171","msg":"trace[1229200613] linearizableReadLoop","detail":"{readStateIndex:386; appliedIndex:386; }","duration":"294.9095ms","start":"2022-06-29T20:28:14.051Z","end":"2022-06-29T20:28:14.346Z","steps":["trace[1229200613] 'read index received'  (duration: 294.8972ms)","trace[1229200613] 'applied index is now lower than readState.Index'  (duration: 8.7µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T20:28:14.380Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"328.3239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-06-29T20:28:14.380Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"329.3465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:353"}
	{"level":"info","ts":"2022-06-29T20:28:14.380Z","caller":"traceutil/trace.go:171","msg":"trace[1580264904] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:372; }","duration":"328.5382ms","start":"2022-06-29T20:28:14.052Z","end":"2022-06-29T20:28:14.380Z","steps":["trace[1580264904] 'agreement among raft nodes before linearized reading'  (duration: 294.4315ms)","trace[1580264904] 'range keys from in-memory index tree'  (duration: 33.8672ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-29T20:28:14.380Z","caller":"traceutil/trace.go:171","msg":"trace[594798499] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:372; }","duration":"329.4019ms","start":"2022-06-29T20:28:14.051Z","end":"2022-06-29T20:28:14.380Z","steps":["trace[594798499] 'agreement among raft nodes before linearized reading'  (duration: 295.045ms)","trace[594798499] 'range keys from in-memory index tree'  (duration: 34.2581ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T20:28:14.380Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:28:14.052Z","time spent":"328.6185ms","remote":"127.0.0.1:33796","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-06-29T20:28:14.380Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:28:14.051Z","time spent":"329.4587ms","remote":"127.0.0.1:33784","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":376,"request content":"key:\"/registry/namespaces/kube-system\" "}
	{"level":"info","ts":"2022-06-29T20:28:14.537Z","caller":"traceutil/trace.go:171","msg":"trace[636619072] linearizableReadLoop","detail":"{readStateIndex:387; appliedIndex:387; }","duration":"141.4121ms","start":"2022-06-29T20:28:14.395Z","end":"2022-06-29T20:28:14.537Z","steps":["trace[636619072] 'read index received'  (duration: 141.3939ms)","trace[636619072] 'applied index is now lower than readState.Index'  (duration: 13.1µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T20:28:14.671Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"275.9666ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:28:14.671Z","caller":"traceutil/trace.go:171","msg":"trace[1653839816] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:373; }","duration":"276.2237ms","start":"2022-06-29T20:28:14.395Z","end":"2022-06-29T20:28:14.671Z","steps":["trace[1653839816] 'agreement among raft nodes before linearized reading'  (duration: 141.6783ms)","trace[1653839816] 'range keys from in-memory index tree'  (duration: 134.256ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T20:28:15.918Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"176.2265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:28:15.918Z","caller":"traceutil/trace.go:171","msg":"trace[1014052489] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:380; }","duration":"176.5514ms","start":"2022-06-29T20:28:15.742Z","end":"2022-06-29T20:28:15.918Z","steps":["trace[1014052489] 'agreement among raft nodes before linearized reading'  (duration: 78.1762ms)","trace[1014052489] 'range keys from in-memory index tree'  (duration: 97.9305ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-29T20:28:31.023Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-29T20:28:31.024Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-20220629202523-2408","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/06/29 20:28:31 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/29 20:28:31 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-29T20:28:31.146Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-06-29T20:28:31.262Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-06-29T20:28:31.317Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-06-29T20:28:31.318Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-20220629202523-2408","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [cf162ac653a9] <==
	* {"level":"warn","ts":"2022-06-29T20:30:29.020Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.0759882s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context deadline exceeded"}
	{"level":"info","ts":"2022-06-29T20:30:29.020Z","caller":"traceutil/trace.go:171","msg":"trace[2054658775] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"2.0769455s","start":"2022-06-29T20:30:26.943Z","end":"2022-06-29T20:30:29.020Z","steps":["trace[2054658775] 'agreement among raft nodes before linearized reading'  (duration: 2.0759199s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:30:29.021Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:30:26.943Z","time spent":"2.0772227s","remote":"127.0.0.1:37582","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	WARNING: 2022/06/29 20:30:29 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2022-06-29T20:30:29.447Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"946.6014ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638329353967810021 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20220629202523-2408.16fd31776cbdbfa4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20220629202523-2408.16fd31776cbdbfa4\" value_size:675 lease:6414957317113033687 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2022-06-29T20:30:29.448Z","caller":"traceutil/trace.go:171","msg":"trace[1648148282] linearizableReadLoop","detail":"{readStateIndex:648; appliedIndex:647; }","duration":"4.5126934s","start":"2022-06-29T20:30:24.935Z","end":"2022-06-29T20:30:29.448Z","steps":["trace[1648148282] 'read index received'  (duration: 3.5656078s)","trace[1648148282] 'applied index is now lower than readState.Index'  (duration: 946.9977ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-29T20:30:29.448Z","caller":"traceutil/trace.go:171","msg":"trace[1422741100] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"2.4985543s","start":"2022-06-29T20:30:26.949Z","end":"2022-06-29T20:30:29.448Z","steps":["trace[1422741100] 'process raft request'  (duration: 1.5514023s)","trace[1422741100] 'compare'  (duration: 946.0539ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T20:30:29.448Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:30:26.949Z","time spent":"2.4986272s","remote":"127.0.0.1:37546","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":784,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20220629202523-2408.16fd31776cbdbfa4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20220629202523-2408.16fd31776cbdbfa4\" value_size:675 lease:6414957317113033687 >> failure:<>"}
	{"level":"warn","ts":"2022-06-29T20:30:29.485Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.5676222s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:30:29.485Z","caller":"traceutil/trace.go:171","msg":"trace[121820227] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:607; }","duration":"1.5678116s","start":"2022-06-29T20:30:27.917Z","end":"2022-06-29T20:30:29.485Z","steps":["trace[121820227] 'agreement among raft nodes before linearized reading'  (duration: 1.5675876s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:30:29.485Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:30:27.917Z","time spent":"1.567903s","remote":"127.0.0.1:37582","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-06-29T20:30:29.485Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.8625835s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:343"}
	{"level":"warn","ts":"2022-06-29T20:30:29.485Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"558.072ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:30:29.485Z","caller":"traceutil/trace.go:171","msg":"trace[59355735] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:607; }","duration":"558.7302ms","start":"2022-06-29T20:30:28.927Z","end":"2022-06-29T20:30:29.485Z","steps":["trace[59355735] 'agreement among raft nodes before linearized reading'  (duration: 558.0743ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T20:30:29.486Z","caller":"traceutil/trace.go:171","msg":"trace[1404020785] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:607; }","duration":"1.862976s","start":"2022-06-29T20:30:27.623Z","end":"2022-06-29T20:30:29.486Z","steps":["trace[1404020785] 'agreement among raft nodes before linearized reading'  (duration: 1.8618991s)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:30:29.486Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:30:27.623Z","time spent":"1.8630568s","remote":"127.0.0.1:37560","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":366,"request content":"key:\"/registry/namespaces/default\" "}
	{"level":"warn","ts":"2022-06-29T20:30:29.485Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"454.9915ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T20:30:29.486Z","caller":"traceutil/trace.go:171","msg":"trace[193731173] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:607; }","duration":"456.3624ms","start":"2022-06-29T20:30:29.030Z","end":"2022-06-29T20:30:29.486Z","steps":["trace[193731173] 'agreement among raft nodes before linearized reading'  (duration: 454.5962ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T20:30:29.486Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T20:30:29.030Z","time spent":"456.4366ms","remote":"127.0.0.1:37582","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-06-29T20:30:45.547Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"119.1629ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638329353967810109 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-bzssg.16fd31787edbb028\" mod_revision:629 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-bzssg.16fd31787edbb028\" value_size:656 lease:6414957317113033687 >> failure:<request_range:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-bzssg.16fd31787edbb028\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-06-29T20:30:45.551Z","caller":"traceutil/trace.go:171","msg":"trace[1321139070] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"207.0658ms","start":"2022-06-29T20:30:45.343Z","end":"2022-06-29T20:30:45.550Z","steps":["trace[1321139070] 'process raft request'  (duration: 84.7525ms)","trace[1321139070] 'compare'  (duration: 113.1244ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T20:30:47.853Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"132.4763ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:481"}
	{"level":"warn","ts":"2022-06-29T20:30:47.853Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"122.5506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1132"}
	{"level":"info","ts":"2022-06-29T20:30:47.854Z","caller":"traceutil/trace.go:171","msg":"trace[1512921429] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:642; }","duration":"132.5712ms","start":"2022-06-29T20:30:47.721Z","end":"2022-06-29T20:30:47.853Z","steps":["trace[1512921429] 'range keys from in-memory index tree'  (duration: 132.2496ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-29T20:30:47.854Z","caller":"traceutil/trace.go:171","msg":"trace[217357278] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:642; }","duration":"122.7726ms","start":"2022-06-29T20:30:47.731Z","end":"2022-06-29T20:30:47.854Z","steps":["trace[217357278] 'range keys from in-memory index tree'  (duration: 122.38ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  20:31:57 up  2:39,  0 users,  load average: 7.56, 8.35, 6.98
	Linux newest-cni-20220629202523-2408 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [451a01d3d1ae] <==
	* W0629 20:28:40.450960       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.468179       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.479733       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.486430       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.492784       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.519377       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.557397       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.574481       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.670187       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.724123       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.757110       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.776558       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.802936       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.813950       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.826892       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.828750       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.845695       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.929419       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.958747       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.968117       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.969610       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:40.981189       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:41.025905       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:41.069161       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 20:28:41.094014       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [a567d6fb06e7] <==
	* Trace[909673204]: [1.1853298s] [1.1853298s] END
	I0629 20:29:53.822720       1 trace.go:205] Trace[1286452990]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts/metrics-server/token,user-agent:kubelet/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:d32bd74d-99e6-4fd9-8a9d-1c8ab5bd420b,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (29-Jun-2022 20:29:52.636) (total time: 1185ms):
	Trace[1286452990]: ---"Object stored in database" 1185ms (20:29:53.822)
	Trace[1286452990]: [1.1858741s] [1.1858741s] END
	I0629 20:29:59.425250       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0629 20:29:59.639556       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0629 20:30:00.120722       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0629 20:30:00.760513       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0629 20:30:01.121222       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0629 20:30:01.425417       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0629 20:30:14.132488       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0629 20:30:14.132489       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0629 20:30:16.437131       1 controller.go:611] quota admission added evaluator for: namespaces
	I0629 20:30:17.225297       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0629 20:30:18.779482       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.104.4.126]
	I0629 20:30:18.926963       1 controller.go:611] quota admission added evaluator for: endpoints
	I0629 20:30:19.320168       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.107.5.7]
	{"level":"warn","ts":"2022-06-29T20:30:26.934Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0027c0c40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	{"level":"warn","ts":"2022-06-29T20:30:28.944Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0027c0c40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	I0629 20:30:29.450864       1 trace.go:205] Trace[1590373146]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:eca5ed3f-b402-4fdb-a625-54c979b4d2dd,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (29-Jun-2022 20:30:26.947) (total time: 2502ms):
	Trace[1590373146]: ---"Object stored in database" 2502ms (20:30:29.450)
	Trace[1590373146]: [2.5028662s] [2.5028662s] END
	I0629 20:30:29.488164       1 trace.go:205] Trace[1217702110]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:1eb26c05-0cd0-4713-9378-9ff48218abff,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (29-Jun-2022 20:30:27.622) (total time: 1865ms):
	Trace[1217702110]: ---"About to write a response" 1865ms (20:30:29.487)
	Trace[1217702110]: [1.8658274s] [1.8658274s] END
	
	* 
	* ==> kube-controller-manager [299a6eb7a074] <==
	* I0629 20:28:03.818878       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0629 20:28:03.818877       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0629 20:28:03.819046       1 event.go:294] "Event occurred" object="newest-cni-20220629202523-2408" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220629202523-2408 event: Registered Node newest-cni-20220629202523-2408 in Controller"
	I0629 20:28:03.818405       1 shared_informer.go:262] Caches are synced for disruption
	I0629 20:28:03.819463       1 disruption.go:371] Sending events to api server.
	I0629 20:28:03.819494       1 shared_informer.go:262] Caches are synced for endpoint
	I0629 20:28:03.824403       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0629 20:28:03.824888       1 shared_informer.go:262] Caches are synced for GC
	I0629 20:28:03.826628       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 20:28:03.829101       1 shared_informer.go:262] Caches are synced for PVC protection
	I0629 20:28:04.218532       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 20:28:04.218710       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0629 20:28:04.218600       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 20:28:04.420493       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0629 20:28:04.542212       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rcjck"
	I0629 20:28:04.635181       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-jjwzr"
	I0629 20:28:04.820001       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-hnxmm"
	I0629 20:28:05.030364       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	E0629 20:28:05.126131       1 replica_set.go:550] sync "kube-system/coredns-6d4b75cb6d" failed with Operation cannot be fulfilled on replicasets.apps "coredns-6d4b75cb6d": the object has been modified; please apply your changes to the latest version and try again
	I0629 20:28:07.055087       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-jjwzr"
	I0629 20:28:08.820170       1 node_lifecycle_controller.go:1192] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0629 20:28:22.556575       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0629 20:28:22.624909       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0629 20:28:22.634758       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0629 20:28:22.661045       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-lvhfd"
	
	* 
	* ==> kube-controller-manager [684b999ec120] <==
	* E0629 20:30:14.036043       1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0629 20:30:14.123755       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0629 20:30:14.427009       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 20:30:14.439491       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 20:30:14.439530       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0629 20:30:17.259330       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0629 20:30:17.634748       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0629 20:30:17.635607       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:30:17.656100       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:30:17.656282       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:30:17.822245       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:30:17.822300       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:30:17.822412       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 20:30:17.858815       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:30:17.858834       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 20:30:17.859613       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:30:17.858854       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 20:30:17.954867       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:30:17.955115       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 20:30:17.955198       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 20:30:17.954976       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 20:30:18.042694       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-bzssg"
	I0629 20:30:18.123225       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-4gmfw"
	E0629 20:30:43.986142       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 20:30:44.632714       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [b0056db737a2] <==
	* I0629 20:29:59.249736       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0629 20:29:59.256288       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0629 20:29:59.324840       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0629 20:29:59.336506       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0629 20:29:59.341833       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0629 20:29:59.520736       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0629 20:29:59.520805       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0629 20:29:59.520851       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 20:30:00.020969       1 server_others.go:206] "Using iptables Proxier"
	I0629 20:30:00.022262       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 20:30:00.022324       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 20:30:00.022384       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 20:30:00.022581       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 20:30:00.023575       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 20:30:00.036190       1 server.go:661] "Version info" version="v1.24.2"
	I0629 20:30:00.036530       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 20:30:00.044436       1 config.go:317] "Starting service config controller"
	I0629 20:30:00.044547       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 20:30:00.044626       1 config.go:226] "Starting endpoint slice config controller"
	I0629 20:30:00.044640       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 20:30:00.045068       1 config.go:444] "Starting node config controller"
	I0629 20:30:00.045089       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 20:30:00.145376       1 shared_informer.go:262] Caches are synced for node config
	I0629 20:30:00.145656       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0629 20:30:00.145712       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [f051cef64fd5] <==
	* I0629 20:28:17.717881       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0629 20:28:17.723597       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0629 20:28:17.727762       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0629 20:28:17.733455       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0629 20:28:17.737485       1 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0629 20:28:17.821930       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0629 20:28:17.821985       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0629 20:28:17.822041       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 20:28:18.031340       1 server_others.go:206] "Using iptables Proxier"
	I0629 20:28:18.031425       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 20:28:18.031497       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 20:28:18.031520       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 20:28:18.031574       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 20:28:18.032166       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 20:28:18.032486       1 server.go:661] "Version info" version="v1.24.2"
	I0629 20:28:18.032667       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 20:28:18.034096       1 config.go:444] "Starting node config controller"
	I0629 20:28:18.034124       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 20:28:18.035115       1 config.go:317] "Starting service config controller"
	I0629 20:28:18.035131       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 20:28:18.035193       1 config.go:226] "Starting endpoint slice config controller"
	I0629 20:28:18.035201       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 20:28:18.134630       1 shared_informer.go:262] Caches are synced for node config
	I0629 20:28:18.136116       1 shared_informer.go:262] Caches are synced for service config
	I0629 20:28:18.136803       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [52b391af30b5] <==
	* W0629 20:29:45.023761       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0629 20:29:46.242528       1 serving.go:348] Generated self-signed cert in-memory
	W0629 20:29:52.220451       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0629 20:29:52.221286       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0629 20:29:52.221347       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0629 20:29:52.221364       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0629 20:29:52.336202       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0629 20:29:52.336305       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 20:29:52.338962       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0629 20:29:52.339092       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 20:29:52.338989       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0629 20:29:52.339011       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0629 20:29:52.439482       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [f627e4c91d25] <==
	* E0629 20:27:47.752031       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0629 20:27:47.823204       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0629 20:27:47.823797       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0629 20:27:47.825195       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0629 20:27:47.825303       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0629 20:27:47.955730       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0629 20:27:47.955872       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0629 20:27:48.022606       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0629 20:27:48.023275       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0629 20:27:48.023519       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0629 20:27:48.023554       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0629 20:27:48.153943       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 20:27:48.154162       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0629 20:27:48.221494       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0629 20:27:48.221687       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0629 20:27:48.280748       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0629 20:27:48.280874       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0629 20:27:48.321308       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0629 20:27:48.321423       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0629 20:27:48.321435       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 20:27:48.321456       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0629 20:27:50.437458       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 20:28:30.928860       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0629 20:28:30.929807       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0629 20:28:30.930706       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 20:29:06 UTC, end at Wed 2022-06-29 20:31:58 UTC. --
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         rpc error: code = Unknown desc = [failed to set up sandbox container "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" network for pod "metrics-server-5c6f97fb75-lvhfd": networkPlugin cni failed to set up pod "metrics-server-5c6f97fb75-lvhfd_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" network for pod "metrics-server-5c6f97fb75-lvhfd": networkPlugin cni failed to teardown pod "metrics-server-5c6f97fb75-lvhfd_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.29 -j CNI-e854091cec98e74c62257a9b -m comment --comment name: "crio" id: "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-e854091cec98e74c62257a9b':No such file or directory
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         Try `iptables -h' or 'iptables --help' for more information.
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         ]
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:  >
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]: E0629 20:30:45.175751    1238 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=<
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         rpc error: code = Unknown desc = [failed to set up sandbox container "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" network for pod "metrics-server-5c6f97fb75-lvhfd": networkPlugin cni failed to set up pod "metrics-server-5c6f97fb75-lvhfd_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" network for pod "metrics-server-5c6f97fb75-lvhfd": networkPlugin cni failed to teardown pod "metrics-server-5c6f97fb75-lvhfd_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.29 -j CNI-e854091cec98e74c62257a9b -m comment --comment name: "crio" id: "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-e854091cec98e74c62257a9b':No such file or directory
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         Try `iptables -h' or 'iptables --help' for more information.
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         ]
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:  > pod="kube-system/metrics-server-5c6f97fb75-lvhfd"
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]: E0629 20:30:45.175797    1238 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err=<
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         rpc error: code = Unknown desc = [failed to set up sandbox container "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" network for pod "metrics-server-5c6f97fb75-lvhfd": networkPlugin cni failed to set up pod "metrics-server-5c6f97fb75-lvhfd_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" network for pod "metrics-server-5c6f97fb75-lvhfd": networkPlugin cni failed to teardown pod "metrics-server-5c6f97fb75-lvhfd_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.29 -j CNI-e854091cec98e74c62257a9b -m comment --comment name: "crio" id: "cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-e854091cec98e74c62257a9b':No such file or directory
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         Try `iptables -h' or 'iptables --help' for more information.
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:         ]
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]:  > pod="kube-system/metrics-server-5c6f97fb75-lvhfd"
	Jun 29 20:30:45 newest-cni-20220629202523-2408 kubelet[1238]: E0629 20:30:45.176144    1238 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-5c6f97fb75-lvhfd_kube-system(96cb2775-91a7-44ab-9aa2-f8059cb6bc1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-5c6f97fb75-lvhfd_kube-system(96cb2775-91a7-44ab-9aa2-f8059cb6bc1f)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1\\\" network for pod \\\"metrics-server-5c6f97fb75-lvhfd\\\": networkPlugin cni failed to set up pod \\\"metrics-server-5c6f97fb75-lvhfd_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1\\\" network for pod \\\"metrics-server-5c6f97fb75-lvhfd\\\": networkPlugin cni failed to teardown po
d \\\"metrics-server-5c6f97fb75-lvhfd_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.29 -j CNI-e854091cec98e74c62257a9b -m comment --comment name: \\\"crio\\\" id: \\\"cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-e854091cec98e74c62257a9b':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-5c6f97fb75-lvhfd" podUID=96cb2775-91a7-44ab-9aa2-f8059cb6bc1f
	Jun 29 20:30:46 newest-cni-20220629202523-2408 kubelet[1238]: I0629 20:30:46.328451    1238 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="cc9262f592231a816707cd177effbec1b3abbefdc84a42b1aef0f1dbbdc82bb1"
	Jun 29 20:30:46 newest-cni-20220629202523-2408 kubelet[1238]: E0629 20:30:46.946781    1238 kuberuntime_manager.go:1051] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: 40bd4e0c2ba3f19a3ca69a5201436ed6212a30ea3d1e1b3c48f01dd6c2197912" podSandboxID="40bd4e0c2ba3f19a3ca69a5201436ed6212a30ea3d1e1b3c48f01dd6c2197912" pod="kube-system/coredns-6d4b75cb6d-hnxmm"
	Jun 29 20:30:46 newest-cni-20220629202523-2408 kubelet[1238]: E0629 20:30:46.947012    1238 generic.go:415] "PLEG: Write status" err="rpc error: code = Unknown desc = Error: No such container: 40bd4e0c2ba3f19a3ca69a5201436ed6212a30ea3d1e1b3c48f01dd6c2197912" pod="kube-system/coredns-6d4b75cb6d-hnxmm"
	Jun 29 20:30:47 newest-cni-20220629202523-2408 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jun 29 20:30:47 newest-cni-20220629202523-2408 kubelet[1238]: I0629 20:30:47.635839    1238 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jun 29 20:30:47 newest-cni-20220629202523-2408 systemd[1]: kubelet.service: Succeeded.
	Jun 29 20:30:47 newest-cni-20220629202523-2408 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [207c2317d2fa] <==
	* I0629 20:30:32.533681       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 20:30:32.842049       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 20:30:32.842228       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [c080efdf686a] <==
	* I0629 20:29:59.339567       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0629 20:30:20.442501       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	

-- /stdout --
** stderr ** 
	E0629 20:31:57.085945    8176 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220629202523-2408 -n newest-cni-20220629202523-2408
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220629202523-2408 -n newest-cni-20220629202523-2408: exit status 2 (8.6421265s)

-- stdout --
	Paused

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "newest-cni-20220629202523-2408" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (89.06s)

TestNetworkPlugins/group/false/DNS (288.83s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5485066s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6556749s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default
E0629 20:38:54.292564    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (18.3417089s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default
E0629 20:39:19.850941    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5733537s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default
E0629 20:39:30.906176    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.53129s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default
E0629 20:39:47.714201    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5554396s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.569633s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.8205464s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5551515s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6105037s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5175232s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/false/DNS (288.83s)

TestNetworkPlugins/group/bridge/DNS (304.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
E0629 20:40:17.155066    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4914993s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5199904s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
E0629 20:40:46.629969    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5472341s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5191787s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5185155s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
E0629 20:41:54.814481    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6647273s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
E0629 20:42:12.787529    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-different-port-20220629201430-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5542813s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4762686s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5396353s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
E0629 20:44:03.090152    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5368084s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0629 20:44:19.857586    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:44:59.408416    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
E0629 20:45:17.159662    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5358389s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (304.12s)

                                                
                                    

TestNetworkPlugins/group/kubenet/DNS (299.39s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
E0629 20:49:03.098665    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5073869s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0629 20:49:16.151212    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220629200908-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0629 20:49:17.210530    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220629200924-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0629 20:49:19.852901    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0629 20:49:52.660856    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:49:52.676731    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:49:52.692155    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:49:52.723769    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:49:52.771839    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:49:52.864942    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:49:53.037111    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:49:53.367927    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0629 20:49:54.014013    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:49:55.308512    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:49:57.873516    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:49:59.400415    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
E0629 20:50:02.999658    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:50:13.255904    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:50:17.158889    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 20:50:17.519792    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 20:50:26.279062    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0629 20:50:33.745008    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:50:39.145376    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:50:43.092853    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:50:46.633716    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0629 20:51:14.708542    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:51:32.195775    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:51:54.808118    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 20:52:00.002485    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:52:12.802256    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-different-port-20220629201430-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0629 20:52:36.637569    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:52:55.191267    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220629200924-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0629 20:53:22.998678    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:53:54.286658    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (96.3µs)
net_test.go:175: failed to do nslookup on kubernetes.default: context deadline exceeded
net_test.go:180: failed nslookup: got="", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kubenet/DNS (299.39s)

Test pass (234/270)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 20.13
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.45
10 TestDownloadOnly/v1.24.2/json-events 15.27
11 TestDownloadOnly/v1.24.2/preload-exists 0
14 TestDownloadOnly/v1.24.2/kubectl 0
15 TestDownloadOnly/v1.24.2/LogsDuration 0.67
16 TestDownloadOnly/DeleteAll 11.77
17 TestDownloadOnly/DeleteAlwaysSucceeds 7.41
18 TestDownloadOnlyKic 46.33
19 TestBinaryMirror 17.09
20 TestOffline 217.5
22 TestAddons/Setup 424.68
26 TestAddons/parallel/MetricsServer 13.13
27 TestAddons/parallel/HelmTiller 35.78
29 TestAddons/parallel/CSI 86.17
30 TestAddons/parallel/Headlamp 36.69
32 TestAddons/serial/GCPAuth 28.41
33 TestAddons/StoppedEnableDisable 24.72
34 TestCertOptions 183.34
35 TestCertExpiration 428.13
36 TestDockerFlags 208.62
37 TestForceSystemdFlag 206.06
38 TestForceSystemdEnv 172.24
43 TestErrorSpam/setup 115.25
44 TestErrorSpam/start 22.34
45 TestErrorSpam/status 20.12
46 TestErrorSpam/pause 17.72
47 TestErrorSpam/unpause 18.19
48 TestErrorSpam/stop 34.19
51 TestFunctional/serial/CopySyncFile 0.03
52 TestFunctional/serial/StartWithProxy 130.42
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 63.48
55 TestFunctional/serial/KubeContext 0.18
56 TestFunctional/serial/KubectlGetPods 0.39
59 TestFunctional/serial/CacheCmd/cache/add_remote 18.69
60 TestFunctional/serial/CacheCmd/cache/add_local 9.58
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.36
62 TestFunctional/serial/CacheCmd/cache/list 0.35
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 6.57
64 TestFunctional/serial/CacheCmd/cache/cache_reload 26.02
65 TestFunctional/serial/CacheCmd/cache/delete 0.74
66 TestFunctional/serial/MinikubeKubectlCmd 2.15
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.11
68 TestFunctional/serial/ExtraConfig 88.09
69 TestFunctional/serial/ComponentHealth 0.24
70 TestFunctional/serial/LogsCmd 7.82
71 TestFunctional/serial/LogsFileCmd 9.06
73 TestFunctional/parallel/ConfigCmd 2.24
75 TestFunctional/parallel/DryRun 12.94
76 TestFunctional/parallel/InternationalLanguage 5.37
77 TestFunctional/parallel/StatusCmd 20.85
82 TestFunctional/parallel/AddonsCmd 3.75
83 TestFunctional/parallel/PersistentVolumeClaim 57.33
85 TestFunctional/parallel/SSHCmd 15.51
86 TestFunctional/parallel/CpCmd 27.64
87 TestFunctional/parallel/MySQL 74.88
88 TestFunctional/parallel/FileSync 6.65
89 TestFunctional/parallel/CertSync 39.12
93 TestFunctional/parallel/NodeLabels 0.22
95 TestFunctional/parallel/NonActiveRuntimeDisabled 6.54
97 TestFunctional/parallel/DockerEnv/powershell 31.09
98 TestFunctional/parallel/ImageCommands/ImageListShort 4.4
99 TestFunctional/parallel/ImageCommands/ImageListTable 4.44
100 TestFunctional/parallel/ImageCommands/ImageListJson 4.34
101 TestFunctional/parallel/ImageCommands/ImageListYaml 4.48
102 TestFunctional/parallel/ImageCommands/ImageBuild 20.22
103 TestFunctional/parallel/ImageCommands/Setup 6.4
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.77
108 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 20.58
110 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 15.97
111 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 26.8
112 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.25
113 TestFunctional/parallel/ProfileCmd/profile_not_create 10.33
114 TestFunctional/parallel/ImageCommands/ImageRemove 9.17
115 TestFunctional/parallel/ProfileCmd/profile_list 7.21
116 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 14.27
117 TestFunctional/parallel/ProfileCmd/profile_json_output 7.68
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 12.79
119 TestFunctional/parallel/UpdateContextCmd/no_changes 4.24
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 4.11
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 4.19
122 TestFunctional/parallel/Version/short 0.36
123 TestFunctional/parallel/Version/components 6.23
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
129 TestFunctional/delete_addon-resizer_images 0.02
130 TestFunctional/delete_my-image_image 0.01
131 TestFunctional/delete_minikube_cached_images 0.01
134 TestIngressAddonLegacy/StartLegacyK8sCluster 145.17
136 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 50.02
137 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 4.78
141 TestJSONOutput/start/Command 139.7
142 TestJSONOutput/start/Audit 0
144 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
145 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
147 TestJSONOutput/pause/Command 6.2
148 TestJSONOutput/pause/Audit 0
150 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
153 TestJSONOutput/unpause/Command 6.22
154 TestJSONOutput/unpause/Audit 0
156 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/stop/Command 18.39
160 TestJSONOutput/stop/Audit 0
162 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
164 TestErrorJSONOutput 7.83
166 TestKicCustomNetwork/create_custom_network 142.53
167 TestKicCustomNetwork/use_default_bridge_network 131.35
168 TestKicExistingNetwork 140.19
169 TestKicCustomSubnet 138.75
170 TestMainNoArgs 0.35
171 TestMinikubeProfile 302.91
174 TestMountStart/serial/StartWithMountFirst 53.09
175 TestMountStart/serial/VerifyMountFirst 6.46
176 TestMountStart/serial/StartWithMountSecond 54.07
177 TestMountStart/serial/VerifyMountSecond 6.45
178 TestMountStart/serial/DeleteFirst 17.93
179 TestMountStart/serial/VerifyMountPostDelete 6.47
180 TestMountStart/serial/Stop 9.04
181 TestMountStart/serial/RestartStopped 30.41
182 TestMountStart/serial/VerifyMountPostStop 6.48
185 TestMultiNode/serial/FreshStart2Nodes 274.93
186 TestMultiNode/serial/DeployApp2Nodes 30.58
187 TestMultiNode/serial/PingHostFrom2Pods 13.58
188 TestMultiNode/serial/AddNode 120.34
189 TestMultiNode/serial/ProfileList 7.17
190 TestMultiNode/serial/CopyFile 240.58
191 TestMultiNode/serial/StopNode 31.83
192 TestMultiNode/serial/StartAfterStop 57.71
194 TestMultiNode/serial/DeleteNode 43.31
195 TestMultiNode/serial/StopMultiNode 42.71
196 TestMultiNode/serial/RestartMultiNode 121.89
197 TestMultiNode/serial/ValidateNameConflict 148.73
201 TestPreload 340.84
202 TestScheduledStopWindows 231.69
206 TestInsufficientStorage 113.57
207 TestRunningBinaryUpgrade 428.2
209 TestKubernetesUpgrade 474.81
210 TestMissingContainerUpgrade 596.77
212 TestNoKubernetes/serial/StartNoK8sWithVersion 0.47
213 TestNoKubernetes/serial/StartWithK8s 174.62
214 TestNoKubernetes/serial/StartWithStopK8s 73.56
215 TestStoppedBinaryUpgrade/Setup 0.91
216 TestStoppedBinaryUpgrade/Upgrade 430.4
218 TestStoppedBinaryUpgrade/MinikubeLogs 12.19
227 TestPause/serial/Start 168.23
239 TestPause/serial/SecondStartNoReconfiguration 71.61
240 TestPause/serial/Pause 7.33
241 TestPause/serial/VerifyStatus 7.7
242 TestPause/serial/Unpause 7.07
244 TestStartStop/group/old-k8s-version/serial/FirstStart 212.28
245 TestPause/serial/PauseAgain 7.72
246 TestPause/serial/DeletePaused 25.27
247 TestPause/serial/VerifyDeletedResources 29.93
249 TestStartStop/group/no-preload/serial/FirstStart 199.58
251 TestStartStop/group/embed-certs/serial/FirstStart 189.76
253 TestStartStop/group/default-k8s-different-port/serial/FirstStart 161.08
254 TestStartStop/group/old-k8s-version/serial/DeployApp 16.67
255 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 7.51
256 TestStartStop/group/old-k8s-version/serial/Stop 20.43
257 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 8.49
258 TestStartStop/group/no-preload/serial/DeployApp 14.99
259 TestStartStop/group/old-k8s-version/serial/SecondStart 480.34
260 TestStartStop/group/embed-certs/serial/DeployApp 12.68
261 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 7.9
262 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 8.73
263 TestStartStop/group/no-preload/serial/Stop 22.04
264 TestStartStop/group/embed-certs/serial/Stop 21.35
265 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 8.06
266 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 7.84
267 TestStartStop/group/no-preload/serial/SecondStart 408.54
268 TestStartStop/group/embed-certs/serial/SecondStart 392.66
269 TestStartStop/group/default-k8s-different-port/serial/DeployApp 12.64
270 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 7.4
271 TestStartStop/group/default-k8s-different-port/serial/Stop 21.44
272 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 7.65
273 TestStartStop/group/default-k8s-different-port/serial/SecondStart 392.32
274 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 17.1
275 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 43.11
276 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.02
277 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 9.51
278 TestStartStop/group/embed-certs/serial/Pause 52.43
279 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.05
280 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.56
281 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 9.12
282 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.55
283 TestStartStop/group/old-k8s-version/serial/Pause 58.94
284 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 9.33
285 TestStartStop/group/no-preload/serial/Pause 63.34
286 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 45.18
287 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.54
289 TestStartStop/group/newest-cni/serial/FirstStart 172.42
290 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 7.66
292 TestNetworkPlugins/group/auto/Start 182.55
293 TestNetworkPlugins/group/kindnet/Start 192.44
295 TestStartStop/group/newest-cni/serial/DeployApp 0
296 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 7.75
297 TestStartStop/group/newest-cni/serial/Stop 20.76
298 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 8.49
299 TestStartStop/group/newest-cni/serial/SecondStart 96.95
300 TestNetworkPlugins/group/auto/KubeletFlags 7.47
301 TestNetworkPlugins/group/auto/NetCatPod 22
302 TestNetworkPlugins/group/kindnet/ControllerPod 5.05
303 TestNetworkPlugins/group/auto/DNS 0.63
304 TestNetworkPlugins/group/kindnet/KubeletFlags 7.9
305 TestNetworkPlugins/group/auto/Localhost 0.58
306 TestNetworkPlugins/group/auto/HairPin 5.51
307 TestNetworkPlugins/group/kindnet/NetCatPod 26.16
309 TestNetworkPlugins/group/kindnet/DNS 0.85
310 TestNetworkPlugins/group/kindnet/Localhost 0.81
311 TestNetworkPlugins/group/kindnet/HairPin 1.11
312 TestNetworkPlugins/group/false/Start 438.01
313 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
314 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
315 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 9.47
317 TestNetworkPlugins/group/bridge/Start 419.32
318 TestNetworkPlugins/group/false/KubeletFlags 8.01
319 TestNetworkPlugins/group/false/NetCatPod 20.94
321 TestNetworkPlugins/group/enable-default-cni/Start 149.57
322 TestNetworkPlugins/group/bridge/KubeletFlags 8.03
323 TestNetworkPlugins/group/bridge/NetCatPod 21.02
325 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 7.68
326 TestNetworkPlugins/group/enable-default-cni/NetCatPod 29.76
327 TestNetworkPlugins/group/kubenet/Start 406.79
328 TestNetworkPlugins/group/enable-default-cni/DNS 0.63
329 TestNetworkPlugins/group/enable-default-cni/Localhost 0.58
330 TestNetworkPlugins/group/enable-default-cni/HairPin 0.48
331 TestNetworkPlugins/group/kubenet/KubeletFlags 7.26
332 TestNetworkPlugins/group/kubenet/NetCatPod 20.15
TestDownloadOnly/v1.16.0/json-events (20.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220629175605-2408 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220629175605-2408 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (20.1276606s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (20.13s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220629175605-2408
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220629175605-2408: exit status 85 (448.6316ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------|-------------------|---------|---------------------|----------|
	| Command |               Args                | Profile  |       User        | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only -p        | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 17:56 GMT |          |
	|         | download-only-20220629175605-2408 |          |                   |         |                     |          |
	|         | --force --alsologtostderr         |          |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |          |                   |         |                     |          |
	|         | --container-runtime=docker        |          |                   |         |                     |          |
	|         | --driver=docker                   |          |                   |         |                     |          |
	|---------|-----------------------------------|----------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 17:56:07
	Running on machine: minikube8
	Binary: Built with gc go1.18.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 17:56:07.606943    8160 out.go:296] Setting OutFile to fd 564 ...
	I0629 17:56:07.667939    8160 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 17:56:07.667939    8160 out.go:309] Setting ErrFile to fd 568...
	I0629 17:56:07.667939    8160 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 17:56:07.685955    8160 root.go:307] Error reading config file at C:\Users\jenkins.minikube8\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube8\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0629 17:56:07.689942    8160 out.go:303] Setting JSON to true
	I0629 17:56:07.691943    8160 start.go:115] hostinfo: {"hostname":"minikube8","uptime":17930,"bootTime":1656507437,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0629 17:56:07.691943    8160 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 17:56:07.699938    8160 out.go:97] [download-only-20220629175605-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0629 17:56:07.699938    8160 notify.go:193] Checking for updates...
	W0629 17:56:07.699938    8160 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0629 17:56:07.705942    8160 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 17:56:07.710938    8160 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0629 17:56:07.718938    8160 out.go:169] MINIKUBE_LOCATION=14420
	I0629 17:56:07.723938    8160 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0629 17:56:07.730982    8160 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0629 17:56:07.731952    8160 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 17:56:10.444686    8160 docker.go:137] docker version: linux-20.10.16
	I0629 17:56:10.452204    8160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 17:56:12.504798    8160 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0525804s)
	I0629 17:56:12.506119    8160 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-29 17:56:11.5023182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0629 17:56:12.510035    8160 out.go:97] Using the docker driver based on user configuration
	I0629 17:56:12.510245    8160 start.go:284] selected driver: docker
	I0629 17:56:12.510332    8160 start.go:808] validating driver "docker" against <nil>
	I0629 17:56:12.524529    8160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 17:56:14.582009    8160 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0574659s)
	I0629 17:56:14.582273    8160 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-29 17:56:13.5669129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0629 17:56:14.582273    8160 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0629 17:56:14.706271    8160 start_flags.go:377] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I0629 17:56:14.706440    8160 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0629 17:56:14.720307    8160 out.go:169] Using Docker Desktop driver with root privileges
	I0629 17:56:14.727531    8160 cni.go:95] Creating CNI manager for ""
	I0629 17:56:14.727863    8160 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 17:56:14.727863    8160 start_flags.go:310] config:
	{Name:download-only-20220629175605-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220629175605-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomai
n:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 17:56:14.732488    8160 out.go:97] Starting control plane node download-only-20220629175605-2408 in cluster download-only-20220629175605-2408
	I0629 17:56:14.732632    8160 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 17:56:14.746295    8160 out.go:97] Pulling base image ...
	I0629 17:56:14.746366    8160 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 17:56:14.746366    8160 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 17:56:14.790195    8160 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0629 17:56:14.790195    8160 cache.go:57] Caching tarball of preloaded images
	I0629 17:56:14.791148    8160 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 17:56:14.794125    8160 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0629 17:56:14.794125    8160 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0629 17:56:14.863397    8160 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0629 17:56:15.880563    8160 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e to local cache
	I0629 17:56:15.880595    8160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e.tar -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.32-1656350719-14420@sha256_e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e.tar
	I0629 17:56:15.880595    8160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e.tar -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.32-1656350719-14420@sha256_e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e.tar
	I0629 17:56:15.880595    8160 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local cache directory
	I0629 17:56:15.881369    8160 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e to local cache
	I0629 17:56:20.344936    8160 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0629 17:56:20.346045    8160 preload.go:256] verifying checksumm of C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0629 17:56:21.409549    8160 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0629 17:56:21.410569    8160 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\download-only-20220629175605-2408\config.json ...
	I0629 17:56:21.410569    8160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\download-only-20220629175605-2408\config.json: {Name:mk31771562b2a9ce6475a0c040734fdbc37e96eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 17:56:21.411837    8160 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 17:56:21.413034    8160 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220629175605-2408"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.45s)

                                                
                                    
TestDownloadOnly/v1.24.2/json-events (15.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.24.2/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220629175605-2408 --force --alsologtostderr --kubernetes-version=v1.24.2 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220629175605-2408 --force --alsologtostderr --kubernetes-version=v1.24.2 --container-runtime=docker --driver=docker: (15.2673029s)
--- PASS: TestDownloadOnly/v1.24.2/json-events (15.27s)

                                                
                                    
TestDownloadOnly/v1.24.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.24.2/preload-exists
--- PASS: TestDownloadOnly/v1.24.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.24.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.24.2/kubectl
--- PASS: TestDownloadOnly/v1.24.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.24.2/LogsDuration (0.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.24.2/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220629175605-2408
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220629175605-2408: exit status 85 (664.3898ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------|-------------------|---------|---------------------|----------|
	| Command |               Args                | Profile  |       User        | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only -p        | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 17:56 GMT |          |
	|         | download-only-20220629175605-2408 |          |                   |         |                     |          |
	|         | --force --alsologtostderr         |          |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |          |                   |         |                     |          |
	|         | --container-runtime=docker        |          |                   |         |                     |          |
	|         | --driver=docker                   |          |                   |         |                     |          |
	| start   | -o=json --download-only -p        | minikube | minikube8\jenkins | v1.26.0 | 29 Jun 22 17:56 GMT |          |
	|         | download-only-20220629175605-2408 |          |                   |         |                     |          |
	|         | --force --alsologtostderr         |          |                   |         |                     |          |
	|         | --kubernetes-version=v1.24.2      |          |                   |         |                     |          |
	|         | --container-runtime=docker        |          |                   |         |                     |          |
	|         | --driver=docker                   |          |                   |         |                     |          |
	|---------|-----------------------------------|----------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 17:56:26
	Running on machine: minikube8
	Binary: Built with gc go1.18.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 17:56:26.620700    7012 out.go:296] Setting OutFile to fd 684 ...
	I0629 17:56:26.683048    7012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 17:56:26.683048    7012 out.go:309] Setting ErrFile to fd 688...
	I0629 17:56:26.683048    7012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 17:56:26.701283    7012 root.go:307] Error reading config file at C:\Users\jenkins.minikube8\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube8\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0629 17:56:26.702296    7012 out.go:303] Setting JSON to true
	I0629 17:56:26.704810    7012 start.go:115] hostinfo: {"hostname":"minikube8","uptime":17949,"bootTime":1656507437,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0629 17:56:26.704810    7012 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 17:56:26.710444    7012 out.go:97] [download-only-20220629175605-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0629 17:56:26.710849    7012 notify.go:193] Checking for updates...
	I0629 17:56:26.713993    7012 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 17:56:26.717643    7012 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0629 17:56:26.720859    7012 out.go:169] MINIKUBE_LOCATION=14420
	I0629 17:56:26.723485    7012 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0629 17:56:26.727386    7012 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0629 17:56:26.728816    7012 config.go:178] Loaded profile config "download-only-20220629175605-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0629 17:56:26.728816    7012 start.go:716] api.Load failed for download-only-20220629175605-2408: filestore "download-only-20220629175605-2408": Docker machine "download-only-20220629175605-2408" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0629 17:56:26.728816    7012 driver.go:360] Setting default libvirt URI to qemu:///system
	W0629 17:56:26.728816    7012 start.go:716] api.Load failed for download-only-20220629175605-2408: filestore "download-only-20220629175605-2408": Docker machine "download-only-20220629175605-2408" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0629 17:56:29.494766    7012 docker.go:137] docker version: linux-20.10.16
	I0629 17:56:29.502981    7012 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 17:56:31.586297    7012 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0832198s)
	I0629 17:56:31.586591    7012 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-29 17:56:30.5684177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0629 17:56:31.591266    7012 out.go:97] Using the docker driver based on existing profile
	I0629 17:56:31.591342    7012 start.go:284] selected driver: docker
	I0629 17:56:31.591342    7012 start.go:808] validating driver "docker" against &{Name:download-only-20220629175605-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220629175605-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 17:56:31.609472    7012 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 17:56:33.661867    7012 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0523805s)
	I0629 17:56:33.661867    7012 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-29 17:56:32.6593821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 17:56:33.706725    7012 cni.go:95] Creating CNI manager for ""
	I0629 17:56:33.706725    7012 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 17:56:33.706725    7012 start_flags.go:310] config:
	{Name:download-only-20220629175605-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:download-only-20220629175605-2408 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 17:56:33.768804    7012 out.go:97] Starting control plane node download-only-20220629175605-2408 in cluster download-only-20220629175605-2408
	I0629 17:56:33.768804    7012 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 17:56:33.772146    7012 out.go:97] Pulling base image ...
	I0629 17:56:33.772215    7012 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 17:56:33.772321    7012 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 17:56:33.819770    7012 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.2/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 17:56:33.819770    7012 cache.go:57] Caching tarball of preloaded images
	I0629 17:56:33.820303    7012 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 17:56:33.927271    7012 out.go:97] Downloading Kubernetes v1.24.2 preload ...
	I0629 17:56:33.928325    7012 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 ...
	I0629 17:56:33.987810    7012 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.2/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4?checksum=md5:015c5bcd220ede3ee64238beb9734721 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 17:56:34.921923    7012 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e to local cache
	I0629 17:56:34.921923    7012 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e.tar -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.32-1656350719-14420@sha256_e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e.tar
	I0629 17:56:34.922683    7012 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e.tar -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.32-1656350719-14420@sha256_e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e.tar
	I0629 17:56:34.922683    7012 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local cache directory
	I0629 17:56:34.922775    7012 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local cache directory, skipping pull
	I0629 17:56:34.922775    7012 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in cache, skipping pull
	I0629 17:56:34.922775    7012 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220629175605-2408"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.24.2/LogsDuration (0.67s)

                                                
                                    
TestDownloadOnly/DeleteAll (11.77s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (11.7697587s)
--- PASS: TestDownloadOnly/DeleteAll (11.77s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (7.41s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20220629175605-2408
aaa_download_only_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20220629175605-2408: (7.4066853s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (7.41s)

                                                
                                    
TestDownloadOnlyKic (46.33s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20220629175708-2408 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20220629175708-2408 --force --alsologtostderr --driver=docker: (36.4337443s)
helpers_test.go:175: Cleaning up "download-docker-20220629175708-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20220629175708-2408
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20220629175708-2408: (8.6965844s)
--- PASS: TestDownloadOnlyKic (46.33s)

                                                
                                    
TestBinaryMirror (17.09s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220629175755-2408 --alsologtostderr --binary-mirror http://127.0.0.1:52482 --driver=docker
aaa_download_only_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220629175755-2408 --alsologtostderr --binary-mirror http://127.0.0.1:52482 --driver=docker: (8.3995671s)
helpers_test.go:175: Cleaning up "binary-mirror-20220629175755-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-20220629175755-2408
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-20220629175755-2408: (8.4569792s)
--- PASS: TestBinaryMirror (17.09s)

                                                
                                    
TestOffline (217.5s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20220629195545-2408 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-20220629195545-2408 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (3m1.0119576s)
helpers_test.go:175: Cleaning up "offline-docker-20220629195545-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20220629195545-2408
E0629 19:58:54.265788    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20220629195545-2408: (36.4906285s)
--- PASS: TestOffline (217.50s)

                                                
                                    
TestAddons/Setup (424.68s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-20220629175812-2408 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-20220629175812-2408 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m4.6798615s)
--- PASS: TestAddons/Setup (424.68s)

                                                
                                    
TestAddons/parallel/MetricsServer (13.13s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: metrics-server stabilized in 22.0152ms
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-8595bd7d4c-qzrjp" [11397aa9-18b1-47a8-9993-30ed410e0a9c] Running
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0424423s
addons_test.go:367: (dbg) Run:  kubectl --context addons-20220629175812-2408 top pods -n kube-system
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:384: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220629175812-2408 addons disable metrics-server --alsologtostderr -v=1
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:384: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220629175812-2408 addons disable metrics-server --alsologtostderr -v=1: (7.6835261s)
--- PASS: TestAddons/parallel/MetricsServer (13.13s)

                                                
                                    
TestAddons/parallel/HelmTiller (35.78s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: tiller-deploy stabilized in 22.0152ms
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-c7d76457b-bjclw" [013ba2b0-be26-43f1-be3a-17bb5fac999d] Running
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0402385s
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) Run:  kubectl --context addons-20220629175812-2408 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) Done: kubectl --context addons-20220629175812-2408 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (24.0430508s)
addons_test.go:442: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220629175812-2408 addons disable helm-tiller --alsologtostderr -v=1
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:442: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220629175812-2408 addons disable helm-tiller --alsologtostderr -v=1: (6.6543687s)
--- PASS: TestAddons/parallel/HelmTiller (35.78s)

                                                
                                    
TestAddons/parallel/CSI (86.17s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:513: csi-hostpath-driver pods stabilized in 27.4445ms
addons_test.go:516: (dbg) Run:  kubectl --context addons-20220629175812-2408 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:521: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220629175812-2408 get pvc hpvc -o jsonpath={.status.phase} -n default
=== CONT  TestAddons/parallel/CSI
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220629175812-2408 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:526: (dbg) Run:  kubectl --context addons-20220629175812-2408 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:531: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [406815f0-1c4b-4188-a164-e2957f5f409b] Pending
helpers_test.go:342: "task-pv-pod" [406815f0-1c4b-4188-a164-e2957f5f409b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [406815f0-1c4b-4188-a164-e2957f5f409b] Running
addons_test.go:531: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 36.0789947s
addons_test.go:536: (dbg) Run:  kubectl --context addons-20220629175812-2408 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:541: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220629175812-2408 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220629175812-2408 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:546: (dbg) Run:  kubectl --context addons-20220629175812-2408 delete pod task-pv-pod
addons_test.go:546: (dbg) Done: kubectl --context addons-20220629175812-2408 delete pod task-pv-pod: (1.4792006s)
addons_test.go:552: (dbg) Run:  kubectl --context addons-20220629175812-2408 delete pvc hpvc
addons_test.go:558: (dbg) Run:  kubectl --context addons-20220629175812-2408 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:563: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220629175812-2408 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:568: (dbg) Run:  kubectl --context addons-20220629175812-2408 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:573: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [eeba6204-644f-411e-bb07-9e7fbce836f7] Pending
helpers_test.go:342: "task-pv-pod-restore" [eeba6204-644f-411e-bb07-9e7fbce836f7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [eeba6204-644f-411e-bb07-9e7fbce836f7] Running
addons_test.go:573: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 18.0626637s
addons_test.go:578: (dbg) Run:  kubectl --context addons-20220629175812-2408 delete pod task-pv-pod-restore
addons_test.go:578: (dbg) Done: kubectl --context addons-20220629175812-2408 delete pod task-pv-pod-restore: (2.0593047s)
addons_test.go:582: (dbg) Run:  kubectl --context addons-20220629175812-2408 delete pvc hpvc-restore
addons_test.go:586: (dbg) Run:  kubectl --context addons-20220629175812-2408 delete volumesnapshot new-snapshot-demo
addons_test.go:590: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220629175812-2408 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:590: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220629175812-2408 addons disable csi-hostpath-driver --alsologtostderr -v=1: (14.230039s)
addons_test.go:594: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220629175812-2408 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:594: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220629175812-2408 addons disable volumesnapshots --alsologtostderr -v=1: (5.8976035s)
--- PASS: TestAddons/parallel/CSI (86.17s)

                                                
                                    
TestAddons/parallel/Headlamp (36.69s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-20220629175812-2408 --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-20220629175812-2408 --alsologtostderr -v=1: (7.4521063s)
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-866f5bd7bc-f77kg" [082e28ef-18a7-468f-b310-4ba2eca93993] Pending
helpers_test.go:342: "headlamp-866f5bd7bc-f77kg" [082e28ef-18a7-468f-b310-4ba2eca93993] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-866f5bd7bc-f77kg" [082e28ef-18a7-468f-b310-4ba2eca93993] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:342: "headlamp-866f5bd7bc-f77kg" [082e28ef-18a7-468f-b310-4ba2eca93993] Running
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 29.2367496s
--- PASS: TestAddons/parallel/Headlamp (36.69s)

                                                
                                    
TestAddons/serial/GCPAuth (28.41s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:605: (dbg) Run:  kubectl --context addons-20220629175812-2408 create -f testdata\busybox.yaml
addons_test.go:605: (dbg) Done: kubectl --context addons-20220629175812-2408 create -f testdata\busybox.yaml: (1.700298s)
addons_test.go:612: (dbg) Run:  kubectl --context addons-20220629175812-2408 create sa gcp-auth-test
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [a26272b9-128c-47a6-a868-aaf42f137958] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [a26272b9-128c-47a6-a868-aaf42f137958] Running
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 10.0249181s
addons_test.go:624: (dbg) Run:  kubectl --context addons-20220629175812-2408 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:636: (dbg) Run:  kubectl --context addons-20220629175812-2408 describe sa gcp-auth-test
addons_test.go:650: (dbg) Run:  kubectl --context addons-20220629175812-2408 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:674: (dbg) Run:  kubectl --context addons-20220629175812-2408 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:687: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220629175812-2408 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:687: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220629175812-2408 addons disable gcp-auth --alsologtostderr -v=1: (14.7307636s)
--- PASS: TestAddons/serial/GCPAuth (28.41s)

                                                
                                    
TestAddons/StoppedEnableDisable (24.72s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-20220629175812-2408
addons_test.go:134: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-20220629175812-2408: (18.7203067s)
addons_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-20220629175812-2408
addons_test.go:138: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-20220629175812-2408: (3.0246394s)
addons_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-20220629175812-2408
addons_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-20220629175812-2408: (2.9726425s)
--- PASS: TestAddons/StoppedEnableDisable (24.72s)

                                                
                                    
x
+
TestCertOptions (183.34s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20220629200823-2408 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-20220629200823-2408 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (2m24.9568266s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20220629200823-2408 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-20220629200823-2408 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (8.7673796s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-20220629200823-2408 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-20220629200823-2408 -- "sudo cat /etc/kubernetes/admin.conf": (8.9706913s)
helpers_test.go:175: Cleaning up "cert-options-20220629200823-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20220629200823-2408
=== CONT  TestCertOptions
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20220629200823-2408: (19.3608153s)
--- PASS: TestCertOptions (183.34s)
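The SAN assertions above come down to inspecting the apiserver certificate with openssl. The same inspection can be sketched standalone with a throwaway self-signed certificate carrying the names and IPs this run passed via `--apiserver-names`/`--apiserver-ips`; the paths and CN below are illustrative, not taken from the test run.

```shell
# Create a throwaway cert with the same SANs the test configures
# (requires OpenSSL 1.1.1+ for -addext).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikube" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-crt.pem 2>/dev/null

# The same inspection the test performs inside the node against
# /var/lib/minikube/certs/apiserver.crt.
openssl x509 -text -noout -in /tmp/demo-crt.pem | grep -A1 "Subject Alternative Name"
```

The grep prints the SAN extension line, which is what the test scans for its expected DNS names, IPs, and port.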

                                                
                                    
TestCertExpiration (428.13s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220629200714-2408 --memory=2048 --cert-expiration=3m --driver=docker
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-20220629200714-2408 --memory=2048 --cert-expiration=3m --driver=docker: (2m31.148325s)
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220629200714-2408 --memory=2048 --cert-expiration=8760h --driver=docker
E0629 20:13:54.281962    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-20220629200714-2408 --memory=2048 --cert-expiration=8760h --driver=docker: (1m9.3404515s)
helpers_test.go:175: Cleaning up "cert-expiration-20220629200714-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-20220629200714-2408
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-20220629200714-2408: (27.6295083s)
--- PASS: TestCertExpiration (428.13s)
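`--cert-expiration` controls the notAfter date written into the cluster certificates (3m, then 8760h in the runs above). The expiry check itself can be sketched with openssl alone; the throwaway cert and paths below are illustrative, not from the test run.

```shell
# Issue a cert valid for 365 days, then read back its expiry the way one
# would verify what a given --cert-expiration produced.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=minikube" \
  -keyout /tmp/exp-key.pem -out /tmp/exp-crt.pem 2>/dev/null
openssl x509 -enddate -noout -in /tmp/exp-crt.pem   # prints notAfter=<date>
# Exit status 0 means the cert does not expire within the next 60 seconds.
openssl x509 -checkend 60 -noout -in /tmp/exp-crt.pem
```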

                                                
                                    
TestDockerFlags (208.62s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20220629195545-2408 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-20220629195545-2408 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (2m40.2739956s)
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220629195545-2408 ssh "sudo systemctl show docker --property=Environment --no-pager"
=== CONT  TestDockerFlags
docker_test.go:50: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20220629195545-2408 ssh "sudo systemctl show docker --property=Environment --no-pager": (9.3460356s)
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220629195545-2408 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
=== CONT  TestDockerFlags
docker_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20220629195545-2408 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (9.7111177s)
helpers_test.go:175: Cleaning up "docker-flags-20220629195545-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20220629195545-2408
=== CONT  TestDockerFlags
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20220629195545-2408: (29.2928346s)
--- PASS: TestDockerFlags (208.62s)
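The two `systemctl show` probes above assert that the `--docker-env` and `--docker-opt` flags surfaced in the docker unit's `Environment` and `ExecStart` properties. A minimal parsing sketch over sample property lines (hard-coded to mirror this run's flags, not captured live):

```shell
# Shapes of the lines `systemctl show docker --property=...` returns
# (sample values, not live systemd output).
env_line='Environment=FOO=BAR BAZ=BAT'
exec_line='ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --debug --icc=true }'

# Pull out the values the test asserts on.
echo "${env_line#Environment=}" | tr ' ' '\n' | grep '^FOO='
echo "$exec_line" | grep -o -- '--icc=true'
```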

                                                
                                    
TestForceSystemdFlag (206.06s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20220629195545-2408 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-20220629195545-2408 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (2m46.5660651s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20220629195545-2408 ssh "docker info --format {{.CgroupDriver}}"
=== CONT  TestForceSystemdFlag
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-20220629195545-2408 ssh "docker info --format {{.CgroupDriver}}": (10.275395s)
helpers_test.go:175: Cleaning up "force-systemd-flag-20220629195545-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220629195545-2408
=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220629195545-2408: (29.2128816s)
--- PASS: TestForceSystemdFlag (206.06s)

                                                
                                    
TestForceSystemdEnv (172.24s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20220629200933-2408 --memory=2048 --alsologtostderr -v=5 --driver=docker
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-20220629200933-2408 --memory=2048 --alsologtostderr -v=5 --driver=docker: (2m17.4432351s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20220629200933-2408 ssh "docker info --format {{.CgroupDriver}}"
E0629 20:11:54.800398    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-20220629200933-2408 ssh "docker info --format {{.CgroupDriver}}": (7.5976013s)
helpers_test.go:175: Cleaning up "force-systemd-env-20220629200933-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20220629200933-2408
=== CONT  TestForceSystemdEnv
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20220629200933-2408: (27.1991136s)
--- PASS: TestForceSystemdEnv (172.24s)

                                                
                                    
TestErrorSpam/setup (115.25s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20220629180837-2408 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 --driver=docker
E0629 18:10:17.093933    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:10:17.109158    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:10:17.124344    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:10:17.155013    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:10:17.202639    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:10:17.295827    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:10:17.456553    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:10:17.783256    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:10:18.427527    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:10:19.709583    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:10:22.270343    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:10:27.403813    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
error_spam_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-20220629180837-2408 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 --driver=docker: (1m55.2488543s)
error_spam_test.go:88: acceptable stderr: "! C:\\ProgramData\\chocolatey\\bin\\kubectl.exe is version 1.18.2, which may have incompatibilites with Kubernetes 1.24.2."
--- PASS: TestErrorSpam/setup (115.25s)

                                                
                                    
TestErrorSpam/start (22.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 start --dry-run
E0629 18:10:37.658883    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 start --dry-run: (7.5022322s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 start --dry-run: (7.4859934s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 start --dry-run
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 start --dry-run: (7.3481599s)
--- PASS: TestErrorSpam/start (22.34s)

                                                
                                    
TestErrorSpam/status (20.12s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 status
E0629 18:10:58.143542    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 status: (6.7099766s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 status
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 status: (6.6966881s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 status
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 status: (6.7107937s)
--- PASS: TestErrorSpam/status (20.12s)

                                                
                                    
TestErrorSpam/pause (17.72s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 pause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 pause: (6.2942805s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 pause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 pause: (5.7195116s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 pause
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 pause: (5.7046802s)
--- PASS: TestErrorSpam/pause (17.72s)

                                                
                                    
TestErrorSpam/unpause (18.19s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 unpause
E0629 18:11:39.110280    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 unpause: (6.3796113s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 unpause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 unpause: (5.9161604s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 unpause
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 unpause: (5.8886749s)
--- PASS: TestErrorSpam/unpause (18.19s)

                                                
                                    
TestErrorSpam/stop (34.19s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 stop
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 stop: (18.4285985s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 stop
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 stop: (7.852396s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 stop
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220629180837-2408 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220629180837-2408 stop: (7.9119076s)
--- PASS: TestErrorSpam/stop (34.19s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\test\nested\copy\2408\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (130.42s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220629181245-2408 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0629 18:13:01.034890    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
functional_test.go:2160: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220629181245-2408 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (2m10.4136856s)
--- PASS: TestFunctional/serial/StartWithProxy (130.42s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (63.48s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220629181245-2408 --alsologtostderr -v=8
E0629 18:15:17.093829    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:15:44.881297    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
functional_test.go:651: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220629181245-2408 --alsologtostderr -v=8: (1m3.4805087s)
functional_test.go:655: soft start took 1m3.4815369s for "functional-20220629181245-2408" cluster.
--- PASS: TestFunctional/serial/SoftStart (63.48s)

                                                
                                    
TestFunctional/serial/KubeContext (0.18s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.18s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.39s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220629181245-2408 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.39s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (18.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 cache add k8s.gcr.io/pause:3.1: (6.1967255s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 cache add k8s.gcr.io/pause:3.3: (6.0810294s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 cache add k8s.gcr.io/pause:latest: (6.4093964s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (18.69s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (9.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220629181245-2408 C:\Users\jenkins.minikube8\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local539395431\001
functional_test.go:1069: (dbg) Done: docker build -t minikube-local-cache-test:functional-20220629181245-2408 C:\Users\jenkins.minikube8\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local539395431\001: (2.3097003s)
functional_test.go:1081: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 cache add minikube-local-cache-test:functional-20220629181245-2408
functional_test.go:1081: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 cache add minikube-local-cache-test:functional-20220629181245-2408: (5.7713093s)
functional_test.go:1086: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 cache delete minikube-local-cache-test:functional-20220629181245-2408
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220629181245-2408
functional_test.go:1075: (dbg) Done: docker rmi minikube-local-cache-test:functional-20220629181245-2408: (1.1159769s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.58s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.36s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (6.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh sudo crictl images
functional_test.go:1116: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh sudo crictl images: (6.5653611s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (6.57s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (26.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1139: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh sudo docker rmi k8s.gcr.io/pause:latest: (6.647327s)
functional_test.go:1145: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (6.5231526s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 cache reload: (6.3578933s)
functional_test.go:1155: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1155: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (6.4867537s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (26.02s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.74s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (2.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 kubectl -- --context functional-20220629181245-2408 get pods
functional_test.go:708: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 kubectl -- --context functional-20220629181245-2408 get pods: (2.1455202s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (2.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out\kubectl.exe --context functional-20220629181245-2408 get pods
functional_test.go:733: (dbg) Done: out\kubectl.exe --context functional-20220629181245-2408 get pods: (2.1056642s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (88.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220629181245-2408 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220629181245-2408 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m28.0898545s)
functional_test.go:753: restart took 1m28.0899875s for "functional-20220629181245-2408" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (88.09s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220629181245-2408 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.24s)

                                                
                                    
TestFunctional/serial/LogsCmd (7.82s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 logs
functional_test.go:1228: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 logs: (7.8202194s)
--- PASS: TestFunctional/serial/LogsCmd (7.82s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (9.06s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 logs --file C:\Users\jenkins.minikube8\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3959099106\001\logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 logs --file C:\Users\jenkins.minikube8\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3959099106\001\logs.txt: (9.0598879s)
--- PASS: TestFunctional/serial/LogsFileCmd (9.06s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (2.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 config get cpus: exit status 14 (392.0694ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 config get cpus: exit status 14 (346.6289ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.24s)

                                                
                                    
TestFunctional/parallel/DryRun (12.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220629181245-2408 --dry-run --memory 250MB --alsologtostderr --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220629181245-2408 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.4625374s)

                                                
                                                
-- stdout --
	* [functional-20220629181245-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0629 18:20:51.139457    7300 out.go:296] Setting OutFile to fd 276 ...
	I0629 18:20:51.210168    7300 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:20:51.210168    7300 out.go:309] Setting ErrFile to fd 756...
	I0629 18:20:51.210168    7300 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:20:51.231962    7300 out.go:303] Setting JSON to false
	I0629 18:20:51.239647    7300 start.go:115] hostinfo: {"hostname":"minikube8","uptime":19413,"bootTime":1656507438,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0629 18:20:51.240255    7300 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 18:20:51.244732    7300 out.go:177] * [functional-20220629181245-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0629 18:20:51.247178    7300 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 18:20:51.249248    7300 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0629 18:20:51.251773    7300 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 18:20:51.254326    7300 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 18:20:51.259782    7300 config.go:178] Loaded profile config "functional-20220629181245-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 18:20:51.260437    7300 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 18:20:54.056521    7300 docker.go:137] docker version: linux-20.10.16
	I0629 18:20:54.067045    7300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 18:20:56.155209    7300 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0880164s)
	I0629 18:20:56.155457    7300 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:52 SystemTime:2022-06-29 18:20:55.1260717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0629 18:20:56.160714    7300 out.go:177] * Using the docker driver based on existing profile
	I0629 18:20:56.163849    7300 start.go:284] selected driver: docker
	I0629 18:20:56.163849    7300 start.go:808] validating driver "docker" against &{Name:functional-20220629181245-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220629181245-2408 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-polic
y:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 18:20:56.163936    7300 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 18:20:56.317790    7300 out.go:177] 
	W0629 18:20:56.320246    7300 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0629 18:20:56.323602    7300 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220629181245-2408 --dry-run --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:983: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220629181245-2408 --dry-run --alsologtostderr -v=1 --driver=docker: (7.4804343s)
--- PASS: TestFunctional/parallel/DryRun (12.94s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (5.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220629181245-2408 --dry-run --memory 250MB --alsologtostderr --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220629181245-2408 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.3717543s)

                                                
                                                
-- stdout --
	* [functional-20220629181245-2408] minikube v1.26.0 sur Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0629 18:20:46.454419    7324 out.go:296] Setting OutFile to fd 620 ...
	I0629 18:20:46.517209    7324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:20:46.517209    7324 out.go:309] Setting ErrFile to fd 640...
	I0629 18:20:46.517209    7324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:20:46.541562    7324 out.go:303] Setting JSON to false
	I0629 18:20:46.544576    7324 start.go:115] hostinfo: {"hostname":"minikube8","uptime":19409,"bootTime":1656507437,"procs":161,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0629 18:20:46.544576    7324 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 18:20:46.567557    7324 out.go:177] * [functional-20220629181245-2408] minikube v1.26.0 sur Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0629 18:20:46.574005    7324 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0629 18:20:46.577267    7324 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0629 18:20:46.578921    7324 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 18:20:46.585758    7324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 18:20:46.588921    7324 config.go:178] Loaded profile config "functional-20220629181245-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 18:20:46.589867    7324 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 18:20:49.316925    7324 docker.go:137] docker version: linux-20.10.16
	I0629 18:20:49.327023    7324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 18:20:51.441200    7324 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1141166s)
	I0629 18:20:51.441746    7324 info.go:265] docker info: {ID:VJVR:6YY6:UKEE:XATC:6Q6V:NGKO:HJYJ:6DZU:XSSE:RWAS:HORE:QFLL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:52 SystemTime:2022-06-29 18:20:50.3741076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0629 18:20:51.453259    7324 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0629 18:20:51.461616    7324 start.go:284] selected driver: docker
	I0629 18:20:51.461616    7324 start.go:808] validating driver "docker" against &{Name:functional-20220629181245-2408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220629181245-2408 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-polic
y:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 18:20:51.461910    7324 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 18:20:51.539112    7324 out.go:177] 
	W0629 18:20:51.541610    7324 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0629 18:20:51.543957    7324 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (5.37s)

                                                
                                    
TestFunctional/parallel/StatusCmd (20.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 status: (6.9176392s)
functional_test.go:852: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (7.1118894s)
functional_test.go:864: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 status -o json

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 status -o json: (6.8228796s)
--- PASS: TestFunctional/parallel/StatusCmd (20.85s)

TestFunctional/parallel/AddonsCmd (3.75s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 addons list: (3.3540883s)
functional_test.go:1631: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (3.75s)

TestFunctional/parallel/PersistentVolumeClaim (57.33s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [8af8ab2e-2574-449c-9979-c35eac205bcb] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0849401s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220629181245-2408 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220629181245-2408 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220629181245-2408 get pvc myclaim -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220629181245-2408 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220629181245-2408 apply -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [80abb961-e8fb-4b10-888e-07041c970642] Pending
helpers_test.go:342: "sp-pod" [80abb961-e8fb-4b10-888e-07041c970642] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [80abb961-e8fb-4b10-888e-07041c970642] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.0611404s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220629181245-2408 exec sp-pod -- touch /tmp/mount/foo

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:100: (dbg) Done: kubectl --context functional-20220629181245-2408 exec sp-pod -- touch /tmp/mount/foo: (1.1036262s)
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220629181245-2408 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220629181245-2408 delete -f testdata/storage-provisioner/pod.yaml: (4.21398s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220629181245-2408 apply -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:125: (dbg) Done: kubectl --context functional-20220629181245-2408 apply -f testdata/storage-provisioner/pod.yaml: (1.0767003s)
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [a43b54bd-e9b2-405a-a1ce-fe921f7596f3] Pending
helpers_test.go:342: "sp-pod" [a43b54bd-e9b2-405a-a1ce-fe921f7596f3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [a43b54bd-e9b2-405a-a1ce-fe921f7596f3] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.0230583s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220629181245-2408 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (57.33s)

TestFunctional/parallel/SSHCmd (15.51s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "echo hello": (8.2581443s)
functional_test.go:1671: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "cat /etc/hostname"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "cat /etc/hostname": (7.2487735s)
--- PASS: TestFunctional/parallel/SSHCmd (15.51s)

TestFunctional/parallel/CpCmd (27.64s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 cp testdata\cp-test.txt /home/docker/cp-test.txt: (5.9013317s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh -n functional-20220629181245-2408 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh -n functional-20220629181245-2408 "sudo cat /home/docker/cp-test.txt": (7.5019036s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 cp functional-20220629181245-2408:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestFunctionalparallelCpCmd3231594891\001\cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 cp functional-20220629181245-2408:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestFunctionalparallelCpCmd3231594891\001\cp-test.txt: (7.5025018s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh -n functional-20220629181245-2408 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh -n functional-20220629181245-2408 "sudo cat /home/docker/cp-test.txt": (6.7287077s)
--- PASS: TestFunctional/parallel/CpCmd (27.64s)

TestFunctional/parallel/MySQL (74.88s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220629181245-2408 replace --force -f testdata\mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-67f7d69d8b-b2279" [f0829689-622a-4c53-84e4-00e90b285721] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-b2279" [f0829689-622a-4c53-84e4-00e90b285721] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 49.0859583s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220629181245-2408 exec mysql-67f7d69d8b-b2279 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220629181245-2408 exec mysql-67f7d69d8b-b2279 -- mysql -ppassword -e "show databases;": exit status 1 (510.7661ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220629181245-2408 exec mysql-67f7d69d8b-b2279 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220629181245-2408 exec mysql-67f7d69d8b-b2279 -- mysql -ppassword -e "show databases;": exit status 1 (491.713ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220629181245-2408 exec mysql-67f7d69d8b-b2279 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220629181245-2408 exec mysql-67f7d69d8b-b2279 -- mysql -ppassword -e "show databases;": exit status 1 (1.1145595s)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220629181245-2408 exec mysql-67f7d69d8b-b2279 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220629181245-2408 exec mysql-67f7d69d8b-b2279 -- mysql -ppassword -e "show databases;": exit status 1 (514.4031ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220629181245-2408 exec mysql-67f7d69d8b-b2279 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220629181245-2408 exec mysql-67f7d69d8b-b2279 -- mysql -ppassword -e "show databases;": exit status 1 (591.0849ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220629181245-2408 exec mysql-67f7d69d8b-b2279 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220629181245-2408 exec mysql-67f7d69d8b-b2279 -- mysql -ppassword -e "show databases;": exit status 1 (582.0563ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220629181245-2408 exec mysql-67f7d69d8b-b2279 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (74.88s)

TestFunctional/parallel/FileSync (6.65s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/2408/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo cat /etc/test/nested/copy/2408/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1857: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo cat /etc/test/nested/copy/2408/hosts": (6.6514076s)
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (6.65s)

TestFunctional/parallel/CertSync (39.12s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/2408.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo cat /etc/ssl/certs/2408.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo cat /etc/ssl/certs/2408.pem": (6.5440142s)
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/2408.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo cat /usr/share/ca-certificates/2408.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo cat /usr/share/ca-certificates/2408.pem": (6.5858876s)
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo cat /etc/ssl/certs/51391683.0": (6.5596492s)
functional_test.go:1925: Checking for existence of /etc/ssl/certs/24082.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo cat /etc/ssl/certs/24082.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo cat /etc/ssl/certs/24082.pem": (6.4533832s)
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/24082.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo cat /usr/share/ca-certificates/24082.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo cat /usr/share/ca-certificates/24082.pem": (6.4902319s)
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (6.4818659s)
--- PASS: TestFunctional/parallel/CertSync (39.12s)

TestFunctional/parallel/NodeLabels (0.22s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220629181245-2408 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.22s)

TestFunctional/parallel/NonActiveRuntimeDisabled (6.54s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh "sudo systemctl is-active crio": exit status 1 (6.5371581s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (6.54s)

TestFunctional/parallel/DockerEnv/powershell (31.09s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell

=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220629181245-2408 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220629181245-2408"

=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220629181245-2408 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220629181245-2408": (19.2450476s)
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220629181245-2408 docker-env | Invoke-Expression ; docker images"

=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220629181245-2408 docker-env | Invoke-Expression ; docker images": (11.8306716s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (31.09s)

TestFunctional/parallel/ImageCommands/ImageListShort (4.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls --format short

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls --format short: (4.401063s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.7
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-20220629181245-2408
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220629181245-2408
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (4.40s)

TestFunctional/parallel/ImageCommands/ImageListTable (4.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls --format table

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls --format table: (4.4358309s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| docker.io/localhost/my-image                | functional-20220629181245-2408 | 6fcbfdcb5de95 | 1.24MB |
| k8s.gcr.io/etcd                             | 3.5.3-0                        | aebe758cef4cd | 299MB  |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-20220629181245-2408 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
| docker.io/library/mysql                     | 5.7                            | efa50097efbde | 462MB  |
| docker.io/library/nginx                     | alpine                         | f246e6f9d0b28 | 23.5MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
| k8s.gcr.io/pause                            | 3.6                            | 6270bb605e12e | 683kB  |
| k8s.gcr.io/kube-apiserver                   | v1.24.2                        | d3377ffb7177c | 130MB  |
| k8s.gcr.io/kube-proxy                       | v1.24.2                        | a634548d10b03 | 110MB  |
| k8s.gcr.io/pause                            | 3.7                            | 221177c6082a8 | 711kB  |
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
| gcr.io/k8s-minikube/busybox                 | latest                         | beae173ccac6a | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-20220629181245-2408 | eaa47fce7da75 | 30B    |
| docker.io/library/nginx                     | latest                         | 55f4b40fe486a | 142MB  |
| k8s.gcr.io/kube-scheduler                   | v1.24.2                        | 5d725196c1f47 | 51MB   |
| k8s.gcr.io/kube-controller-manager          | v1.24.2                        | 34cdf99b1bb3b | 119MB  |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (4.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (4.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls --format json

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls --format json: (4.3386966s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls --format json:
[
{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},
{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220629181245-2408"],"size":"32900000"},
{"id":"eaa47fce7da755afc6180a6fdb0e8eba0f8366427f07a68691e5fe3c134f58e1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220629181245-2408"],"size":"30"},
{"id":"34cdf99b1bb3b3a62c5b4226c3bc0983ab1f13e776269d1872092091b07203df","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.24.2"],"size":"119000000"},
{"id":"221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.7"],"size":"711000"},
{"id":"a634548d10b032c2a1d704ef9a2ab04c12b0574afe67ee192b196a7f12da9536","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.24.2"],"size":"110000000"},
{"id":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.3-0"],"size":"299000000"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},
{"id":"f246e6f9d0b28d6eb1f7e1f12791f23587c2c6aa42c82aba8d6fe6e2e2de9e95","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23500000"},
{"id":"5d725196c1f47e72d2bc7069776d5928b1fb1e4adf09c18997733099aa3663ac","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.24.2"],"size":"51000000"},
{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},
{"id":"d3377ffb7177cc4becce8a534d8547aca9530cb30fac9ebe479b31102f1ba503","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.24.2"],"size":"130000000"},
{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},
{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},
{"id":"6fcbfdcb5de959e228b5602c7440cda72091b986fa3c444cc4ac4395473c120b","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220629181245-2408"],"size":"1240000"},
{"id":"efa50097efbdef5884e5ebaba4da5899e79609b78cd4fe91b365d5d9d3205188","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"462000000"},
{"id":"55f4b40fe486a5b734b46bb7bf28f52fa31426bf23be068c8e7b19e58d9b8deb","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"}
]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (4.34s)
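The JSON printed by `image ls --format json` above is a flat array of `{id, repoDigests, repoTags, size}` objects with `size` as a decimal byte string. A minimal sketch of parsing it (using a two-entry sample copied from the output above; not part of the test harness itself):

```python
import json

# Two entries copied verbatim from the `image ls --format json` stdout above.
sample = """[
 {"id": "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03",
  "repoDigests": [], "repoTags": ["k8s.gcr.io/coredns/coredns:v1.8.6"], "size": "46800000"},
 {"id": "efa50097efbdef5884e5ebaba4da5899e79609b78cd4fe91b365d5d9d3205188",
  "repoDigests": [], "repoTags": ["docker.io/library/mysql:5.7"], "size": "462000000"}
]"""

images = json.loads(sample)
# "size" is a string of bytes; convert to int before aggregating.
total_bytes = sum(int(img["size"]) for img in images)
by_tag = {tag: int(img["size"]) for img in images for tag in img["repoTags"]}

print(total_bytes)                              # 508800000
print(by_tag["docker.io/library/mysql:5.7"])    # 462000000
```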

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (4.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls --format yaml

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls --format yaml: (4.4767882s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls --format yaml:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: eaa47fce7da755afc6180a6fdb0e8eba0f8366427f07a68691e5fe3c134f58e1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220629181245-2408
size: "30"
- id: a634548d10b032c2a1d704ef9a2ab04c12b0574afe67ee192b196a7f12da9536
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.24.2
size: "110000000"
- id: 5d725196c1f47e72d2bc7069776d5928b1fb1e4adf09c18997733099aa3663ac
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.24.2
size: "51000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: efa50097efbdef5884e5ebaba4da5899e79609b78cd4fe91b365d5d9d3205188
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "462000000"
- id: 221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.7
size: "711000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 55f4b40fe486a5b734b46bb7bf28f52fa31426bf23be068c8e7b19e58d9b8deb
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: f246e6f9d0b28d6eb1f7e1f12791f23587c2c6aa42c82aba8d6fe6e2e2de9e95
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23500000"
- id: d3377ffb7177cc4becce8a534d8547aca9530cb30fac9ebe479b31102f1ba503
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.24.2
size: "130000000"
- id: 34cdf99b1bb3b3a62c5b4226c3bc0983ab1f13e776269d1872092091b07203df
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.24.2
size: "119000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.3-0
size: "299000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220629181245-2408
size: "32900000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (4.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (20.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh pgrep buildkitd

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 ssh pgrep buildkitd: exit status 1 (6.5391981s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image build -t localhost/my-image:functional-20220629181245-2408 testdata\build

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image build -t localhost/my-image:functional-20220629181245-2408 testdata\build: (9.2552397s)
functional_test.go:315: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image build -t localhost/my-image:functional-20220629181245-2408 testdata\build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in ff29731e5d65
Removing intermediate container ff29731e5d65
---> 798779beb383
Step 3/3 : ADD content.txt /
---> 6fcbfdcb5de9
Successfully built 6fcbfdcb5de9
Successfully tagged localhost/my-image:functional-20220629181245-2408
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls: (4.4290952s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (20.22s)
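The "Step 1/3" through "Step 3/3" lines in the build log above fully determine the Dockerfile under testdata\build; a reconstruction inferred from the log (not copied from the repository):

```dockerfile
# Inferred from the three build steps logged above.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```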

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (6.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.0322192s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220629181245-2408
functional_test.go:342: (dbg) Done: docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220629181245-2408: (1.3451591s)
--- PASS: TestFunctional/parallel/ImageCommands/Setup (6.40s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20220629181245-2408 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220629181245-2408 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [e21fe8b3-2db6-4885-ad25-f7dbd92d9f61] Pending
helpers_test.go:342: "nginx-svc" [e21fe8b3-2db6-4885-ad25-f7dbd92d9f61] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [e21fe8b3-2db6-4885-ad25-f7dbd92d9f61] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.1350332s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (20.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629181245-2408

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629181245-2408: (16.0970619s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls: (4.4802091s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (20.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (15.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629181245-2408

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629181245-2408: (10.8697794s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls: (5.1006451s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (15.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (26.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.7009904s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220629181245-2408
functional_test.go:235: (dbg) Done: docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220629181245-2408: (1.4318629s)
functional_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629181245-2408

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629181245-2408: (16.2692498s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls: (4.3744797s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (26.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image save gcr.io/google-containers/addon-resizer:functional-20220629181245-2408 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image save gcr.io/google-containers/addon-resizer:functional-20220629181245-2408 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (8.2509512s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.25s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (10.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe profile lis

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe profile lis: (3.0738879s)
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (7.259567s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (10.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (9.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image rm gcr.io/google-containers/addon-resizer:functional-20220629181245-2408

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image rm gcr.io/google-containers/addon-resizer:functional-20220629181245-2408: (4.4309036s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls: (4.7406181s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (9.17s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (7.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-windows-amd64.exe profile list

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Done: out/minikube-windows-amd64.exe profile list: (6.8238037s)
functional_test.go:1310: Took "6.8238037s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1324: Took "388.0358ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (7.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (14.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (9.7343636s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image ls: (4.5373367s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (14.27s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (7.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (7.3029507s)
functional_test.go:1361: Took "7.303119s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1374: Took "374.4024ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (7.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (12.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220629181245-2408
functional_test.go:414: (dbg) Done: docker rmi gcr.io/google-containers/addon-resizer:functional-20220629181245-2408: (1.1343417s)
functional_test.go:419: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220629181245-2408

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220629181245-2408: (10.5279539s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220629181245-2408
functional_test.go:424: (dbg) Done: docker image inspect gcr.io/google-containers/addon-resizer:functional-20220629181245-2408: (1.1104342s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (12.79s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (4.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 update-context --alsologtostderr -v=2

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 update-context --alsologtostderr -v=2: (4.2378576s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (4.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (4.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 update-context --alsologtostderr -v=2

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 update-context --alsologtostderr -v=2: (4.1083889s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (4.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (4.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 update-context --alsologtostderr -v=2

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 update-context --alsologtostderr -v=2: (4.189221s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (4.19s)

                                                
                                    
TestFunctional/parallel/Version/short (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 version --short
--- PASS: TestFunctional/parallel/Version/short (0.36s)

TestFunctional/parallel/Version/components (6.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220629181245-2408 version -o=json --components

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220629181245-2408 version -o=json --components: (6.231034s)
E0629 18:25:17.104885    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/Version/components (6.23s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20220629181245-2408 tunnel --alsologtostderr] ...
helpers_test.go:506: unable to kill pid 7088: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/delete_addon-resizer_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: context deadline exceeded (0s)
functional_test.go:187: failed to remove image "gcr.io/google-containers/addon-resizer:1.8.8" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8": context deadline exceeded
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220629181245-2408
functional_test.go:185: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220629181245-2408: context deadline exceeded (0s)
functional_test.go:187: failed to remove image "gcr.io/google-containers/addon-resizer:functional-20220629181245-2408" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220629181245-2408": context deadline exceeded
--- PASS: TestFunctional/delete_addon-resizer_images (0.02s)

TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220629181245-2408
functional_test.go:193: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-20220629181245-2408: context deadline exceeded (0s)
functional_test.go:195: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-20220629181245-2408": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220629181245-2408
functional_test.go:201: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-20220629181245-2408: context deadline exceeded (923.5µs)
functional_test.go:203: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-20220629181245-2408": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (145.17s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220629185334-2408 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0629 18:53:54.240628    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:53:54.255785    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:53:54.271200    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:53:54.301845    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:53:54.347894    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:53:54.442719    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:53:54.616832    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:53:54.947133    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:53:55.603507    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:53:56.897039    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:53:59.467756    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:54:04.591253    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:54:14.836453    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:54:35.318973    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:55:16.283200    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:55:17.113283    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220629185334-2408 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: (2m25.1729528s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (145.17s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (50.02s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220629185334-2408 addons enable ingress --alsologtostderr -v=5
E0629 18:56:38.213133    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220629185334-2408 addons enable ingress --alsologtostderr -v=5: (50.0220494s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (50.02s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (4.78s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220629185334-2408 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220629185334-2408 addons enable ingress-dns --alsologtostderr -v=5: (4.7793184s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (4.78s)

TestJSONOutput/start/Command (139.7s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-20220629185804-2408 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0629 18:58:54.253564    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 18:59:22.065496    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 19:00:00.284343    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 19:00:17.123728    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-20220629185804-2408 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (2m19.6953069s)
--- PASS: TestJSONOutput/start/Command (139.70s)

TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (6.2s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-20220629185804-2408 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-20220629185804-2408 --output=json --user=testUser: (6.2012156s)
--- PASS: TestJSONOutput/pause/Command (6.20s)

TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (6.22s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-20220629185804-2408 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-20220629185804-2408 --output=json --user=testUser: (6.2208654s)
--- PASS: TestJSONOutput/unpause/Command (6.22s)

TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (18.39s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-20220629185804-2408 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-20220629185804-2408 --output=json --user=testUser: (18.3855592s)
--- PASS: TestJSONOutput/stop/Command (18.39s)

TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (7.83s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20220629190115-2408 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20220629190115-2408 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (391.9319ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"db92eff2-ce66-46c2-9344-21d1f896b294","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220629190115-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"159651fe-cf09-4d59-bd01-8b121cce8089","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube8\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"56a5d9bc-8066-436c-b704-2eb62f65faef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"2baf216f-6c3c-4229-8972-b7dda2a6bd6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14420"}}
	{"specversion":"1.0","id":"4f4f3082-e0c9-4c5f-ac11-a6b34c8fd41b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5343fe41-4675-4b4e-8050-913c32a5229f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220629190115-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20220629190115-2408
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20220629190115-2408: (7.4406478s)
--- PASS: TestErrorJSONOutput (7.83s)

TestKicCustomNetwork/create_custom_network (142.53s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220629190123-2408 --network=
E0629 19:01:54.778440    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:01:54.794258    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:01:54.810036    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:01:54.841268    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:01:54.887611    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:01:54.981122    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:01:55.155270    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:01:55.488799    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:01:56.137425    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:01:57.428846    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:01:59.997449    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:02:05.122295    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:02:15.364830    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:02:35.846469    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:03:16.809699    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220629190123-2408 --network=: (1m59.4393665s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0931575s)
helpers_test.go:175: Cleaning up "docker-network-20220629190123-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220629190123-2408
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220629190123-2408: (21.9845033s)
--- PASS: TestKicCustomNetwork/create_custom_network (142.53s)

TestKicCustomNetwork/use_default_bridge_network (131.35s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220629190345-2408 --network=bridge
E0629 19:03:54.258747    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 19:04:38.736071    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:05:17.118693    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220629190345-2408 --network=bridge: (1m53.1766096s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.1029817s)
helpers_test.go:175: Cleaning up "docker-network-20220629190345-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220629190345-2408
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220629190345-2408: (17.0589065s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (131.35s)

TestKicExistingNetwork (140.19s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0850023s)
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-20220629190601-2408 --network=existing-network
E0629 19:06:54.778433    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:07:22.586644    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-20220629190601-2408 --network=existing-network: (1m56.4913988s)
helpers_test.go:175: Cleaning up "existing-network-20220629190601-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-20220629190601-2408
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-20220629190601-2408: (16.8881767s)
--- PASS: TestKicExistingNetwork (140.19s)

                                                
                                    
TestKicCustomSubnet (138.75s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-20220629190817-2408 --subnet=192.168.60.0/24
E0629 19:08:54.246931    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-20220629190817-2408 --subnet=192.168.60.0/24: (1m55.6649189s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220629190817-2408 --format "{{(index .IPAM.Config 0).Subnet}}"
kic_custom_network_test.go:133: (dbg) Done: docker network inspect custom-subnet-20220629190817-2408 --format "{{(index .IPAM.Config 0).Subnet}}": (1.0709507s)
helpers_test.go:175: Cleaning up "custom-subnet-20220629190817-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-20220629190817-2408
E0629 19:10:17.118722    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 19:10:17.442134    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-20220629190817-2408: (22.0014357s)
--- PASS: TestKicCustomSubnet (138.75s)
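The subnet check above relies on the Go template `{{(index .IPAM.Config 0).Subnet}}`, which indexes into the first IPAM config entry of `docker network inspect` output. As a minimal sketch of the same extraction without a Docker daemon available, the sample JSON below is a hypothetical, trimmed-down inspect dump, not real command output:

```shell
# Hypothetical, abbreviated `docker network inspect` payload (assumption:
# real output is a JSON array with an IPAM.Config list per network).
inspect_json='[{"IPAM":{"Config":[{"Subnet":"192.168.60.0/24"}]}}]'

# Pull the first Subnet value, mimicking {{(index .IPAM.Config 0).Subnet}}.
subnet=$(printf '%s' "$inspect_json" | grep -o '"Subnet": *"[^"]*"' | head -n1 | cut -d'"' -f4)
echo "$subnet"
```

The test then compares this value against the `--subnet=192.168.60.0/24` flag passed to `minikube start`.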

                                                
                                    
TestMainNoArgs (0.35s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.35s)

                                                
                                    
TestMinikubeProfile (302.91s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-20220629191036-2408 --driver=docker
E0629 19:11:54.783702    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-20220629191036-2408 --driver=docker: (1m57.9637352s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-20220629191036-2408 --driver=docker
E0629 19:13:54.274740    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-20220629191036-2408 --driver=docker: (1m55.3518412s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-20220629191036-2408
minikube_profile_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe profile first-20220629191036-2408: (3.0521993s)
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (10.7170989s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-20220629191036-2408
minikube_profile_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe profile second-20220629191036-2408: (3.0433172s)
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (10.3965375s)
helpers_test.go:175: Cleaning up "second-20220629191036-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-20220629191036-2408
E0629 19:15:17.128492    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-20220629191036-2408: (20.8857309s)
helpers_test.go:175: Cleaning up "first-20220629191036-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-20220629191036-2408
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-20220629191036-2408: (21.5012551s)
--- PASS: TestMinikubeProfile (302.91s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (53.09s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-20220629191539-2408 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-20220629191539-2408 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (52.080136s)
--- PASS: TestMountStart/serial/StartWithMountFirst (53.09s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (6.46s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-20220629191539-2408 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-20220629191539-2408 ssh -- ls /minikube-host: (6.4643859s)
--- PASS: TestMountStart/serial/VerifyMountFirst (6.46s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (54.07s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20220629191539-2408 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
E0629 19:16:40.291943    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 19:16:54.772897    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-20220629191539-2408 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (53.0573132s)
--- PASS: TestMountStart/serial/StartWithMountSecond (54.07s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (6.45s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220629191539-2408 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220629191539-2408 ssh -- ls /minikube-host: (6.4487708s)
--- PASS: TestMountStart/serial/VerifyMountSecond (6.45s)

                                                
                                    
TestMountStart/serial/DeleteFirst (17.93s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-20220629191539-2408 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-20220629191539-2408 --alsologtostderr -v=5: (17.9294182s)
--- PASS: TestMountStart/serial/DeleteFirst (17.93s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (6.47s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220629191539-2408 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220629191539-2408 ssh -- ls /minikube-host: (6.4688264s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (6.47s)

                                                
                                    
TestMountStart/serial/Stop (9.04s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-20220629191539-2408
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-20220629191539-2408: (9.0398302s)
--- PASS: TestMountStart/serial/Stop (9.04s)

                                                
                                    
TestMountStart/serial/RestartStopped (30.41s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20220629191539-2408
E0629 19:18:17.953733    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-20220629191539-2408: (29.4044047s)
--- PASS: TestMountStart/serial/RestartStopped (30.41s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (6.48s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220629191539-2408 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220629191539-2408 ssh -- ls /minikube-host: (6.4800697s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (6.48s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (274.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220629191914-2408 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0629 19:20:17.127771    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 19:21:54.786563    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
multinode_test.go:83: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220629191914-2408 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (4m24.2080298s)
multinode_test.go:89: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status --alsologtostderr
multinode_test.go:89: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status --alsologtostderr: (10.720045s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (274.93s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (30.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (3.139669s)
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- rollout status deployment/busybox
E0629 19:23:54.253154    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- rollout status deployment/busybox: (3.6392569s)
multinode_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- get pods -o jsonpath='{.items[*].status.podIP}': (2.5519422s)
multinode_test.go:502: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:502: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- get pods -o jsonpath='{.items[*].metadata.name}': (2.5427831s)
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-dnbhx -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-dnbhx -- nslookup kubernetes.io: (3.9517477s)
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-rbqbj -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-rbqbj -- nslookup kubernetes.io: (3.7435982s)
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-dnbhx -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-dnbhx -- nslookup kubernetes.default: (2.7165617s)
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-rbqbj -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-rbqbj -- nslookup kubernetes.default: (2.8003339s)
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-dnbhx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-dnbhx -- nslookup kubernetes.default.svc.cluster.local: (2.7710582s)
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-rbqbj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-rbqbj -- nslookup kubernetes.default.svc.cluster.local: (2.716975s)
--- PASS: TestMultiNode/serial/DeployApp2Nodes (30.58s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (13.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:538: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- get pods -o jsonpath='{.items[*].metadata.name}': (2.5128087s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-dnbhx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:546: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-dnbhx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.7372347s)
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-dnbhx -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-dnbhx -- sh -c "ping -c 1 192.168.65.2": (2.7731303s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-rbqbj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:546: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-rbqbj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.7303732s)
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-rbqbj -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220629191914-2408 -- exec busybox-d46db594c-rbqbj -- sh -c "ping -c 1 192.168.65.2": (2.8301207s)
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (13.58s)
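The pipeline above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) assumes the fifth line of busybox's nslookup output carries the resolved address as its third space-separated field. A minimal local sketch, using a canned reply in that assumed shape rather than a live lookup:

```shell
# Hypothetical busybox nslookup reply (assumption: line 5 is the answer line).
nslookup_output='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.65.2 host.minikube.internal'

# NR==5 keeps the fifth line; cut takes the third field, i.e. the bare IP
# that the test subsequently checks with `ping -c 1`.
host_ip=$(printf '%s\n' "$nslookup_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

This line-number dependence is why the test is sensitive to the resolver's output format inside the busybox image.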

                                                
                                    
TestMultiNode/serial/AddNode (120.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220629191914-2408 -v 3 --alsologtostderr
E0629 19:25:17.132257    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
multinode_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-20220629191914-2408 -v 3 --alsologtostderr: (1m45.613272s)
multinode_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status --alsologtostderr: (14.7263857s)
--- PASS: TestMultiNode/serial/AddNode (120.34s)

                                                
                                    
TestMultiNode/serial/ProfileList (7.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (7.1742091s)
--- PASS: TestMultiNode/serial/ProfileList (7.17s)

                                                
                                    
TestMultiNode/serial/CopyFile (240.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status --output json --alsologtostderr
E0629 19:26:54.785963    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
multinode_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status --output json --alsologtostderr: (14.4805735s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp testdata\cp-test.txt multinode-20220629191914-2408:/home/docker/cp-test.txt
E0629 19:26:57.460395    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp testdata\cp-test.txt multinode-20220629191914-2408:/home/docker/cp-test.txt: (7.0785584s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408 "sudo cat /home/docker/cp-test.txt": (7.0140539s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestMultiNodeserialCopyFile3649564904\001\cp-test_multinode-20220629191914-2408.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestMultiNodeserialCopyFile3649564904\001\cp-test_multinode-20220629191914-2408.txt: (7.0907198s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408 "sudo cat /home/docker/cp-test.txt": (7.0543685s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408:/home/docker/cp-test.txt multinode-20220629191914-2408-m02:/home/docker/cp-test_multinode-20220629191914-2408_multinode-20220629191914-2408-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408:/home/docker/cp-test.txt multinode-20220629191914-2408-m02:/home/docker/cp-test_multinode-20220629191914-2408_multinode-20220629191914-2408-m02.txt: (9.6199689s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408 "sudo cat /home/docker/cp-test.txt": (7.047161s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m02 "sudo cat /home/docker/cp-test_multinode-20220629191914-2408_multinode-20220629191914-2408-m02.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m02 "sudo cat /home/docker/cp-test_multinode-20220629191914-2408_multinode-20220629191914-2408-m02.txt": (7.0005615s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408:/home/docker/cp-test.txt multinode-20220629191914-2408-m03:/home/docker/cp-test_multinode-20220629191914-2408_multinode-20220629191914-2408-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408:/home/docker/cp-test.txt multinode-20220629191914-2408-m03:/home/docker/cp-test_multinode-20220629191914-2408_multinode-20220629191914-2408-m03.txt: (9.6320456s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408 "sudo cat /home/docker/cp-test.txt": (7.0961857s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m03 "sudo cat /home/docker/cp-test_multinode-20220629191914-2408_multinode-20220629191914-2408-m03.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m03 "sudo cat /home/docker/cp-test_multinode-20220629191914-2408_multinode-20220629191914-2408-m03.txt": (6.9567557s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp testdata\cp-test.txt multinode-20220629191914-2408-m02:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp testdata\cp-test.txt multinode-20220629191914-2408-m02:/home/docker/cp-test.txt: (7.1099855s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m02 "sudo cat /home/docker/cp-test.txt": (7.0857702s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestMultiNodeserialCopyFile3649564904\001\cp-test_multinode-20220629191914-2408-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestMultiNodeserialCopyFile3649564904\001\cp-test_multinode-20220629191914-2408-m02.txt: (7.0503768s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m02 "sudo cat /home/docker/cp-test.txt": (7.0140274s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408-m02:/home/docker/cp-test.txt multinode-20220629191914-2408:/home/docker/cp-test_multinode-20220629191914-2408-m02_multinode-20220629191914-2408.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408-m02:/home/docker/cp-test.txt multinode-20220629191914-2408:/home/docker/cp-test_multinode-20220629191914-2408-m02_multinode-20220629191914-2408.txt: (9.4484087s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m02 "sudo cat /home/docker/cp-test.txt"
E0629 19:28:54.258330    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m02 "sudo cat /home/docker/cp-test.txt": (7.0941686s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408 "sudo cat /home/docker/cp-test_multinode-20220629191914-2408-m02_multinode-20220629191914-2408.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408 "sudo cat /home/docker/cp-test_multinode-20220629191914-2408-m02_multinode-20220629191914-2408.txt": (7.0436207s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408-m02:/home/docker/cp-test.txt multinode-20220629191914-2408-m03:/home/docker/cp-test_multinode-20220629191914-2408-m02_multinode-20220629191914-2408-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408-m02:/home/docker/cp-test.txt multinode-20220629191914-2408-m03:/home/docker/cp-test_multinode-20220629191914-2408-m02_multinode-20220629191914-2408-m03.txt: (9.4468968s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m02 "sudo cat /home/docker/cp-test.txt": (7.0273462s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m03 "sudo cat /home/docker/cp-test_multinode-20220629191914-2408-m02_multinode-20220629191914-2408-m03.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m03 "sudo cat /home/docker/cp-test_multinode-20220629191914-2408-m02_multinode-20220629191914-2408-m03.txt": (7.0385347s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp testdata\cp-test.txt multinode-20220629191914-2408-m03:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp testdata\cp-test.txt multinode-20220629191914-2408-m03:/home/docker/cp-test.txt: (7.1001671s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m03 "sudo cat /home/docker/cp-test.txt": (7.0206303s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestMultiNodeserialCopyFile3649564904\001\cp-test_multinode-20220629191914-2408-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestMultiNodeserialCopyFile3649564904\001\cp-test_multinode-20220629191914-2408-m03.txt: (7.0534451s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m03 "sudo cat /home/docker/cp-test.txt": (7.0397634s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408-m03:/home/docker/cp-test.txt multinode-20220629191914-2408:/home/docker/cp-test_multinode-20220629191914-2408-m03_multinode-20220629191914-2408.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408-m03:/home/docker/cp-test.txt multinode-20220629191914-2408:/home/docker/cp-test_multinode-20220629191914-2408-m03_multinode-20220629191914-2408.txt: (9.4838072s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m03 "sudo cat /home/docker/cp-test.txt": (7.0223278s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408 "sudo cat /home/docker/cp-test_multinode-20220629191914-2408-m03_multinode-20220629191914-2408.txt"
E0629 19:30:17.121050    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408 "sudo cat /home/docker/cp-test_multinode-20220629191914-2408-m03_multinode-20220629191914-2408.txt": (6.9159107s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408-m03:/home/docker/cp-test.txt multinode-20220629191914-2408-m02:/home/docker/cp-test_multinode-20220629191914-2408-m03_multinode-20220629191914-2408-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 cp multinode-20220629191914-2408-m03:/home/docker/cp-test.txt multinode-20220629191914-2408-m02:/home/docker/cp-test_multinode-20220629191914-2408-m03_multinode-20220629191914-2408-m02.txt: (9.5358818s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m03 "sudo cat /home/docker/cp-test.txt": (7.0060907s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m02 "sudo cat /home/docker/cp-test_multinode-20220629191914-2408-m03_multinode-20220629191914-2408-m02.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 ssh -n multinode-20220629191914-2408-m02 "sudo cat /home/docker/cp-test_multinode-20220629191914-2408-m03_multinode-20220629191914-2408-m02.txt": (6.9667073s)
--- PASS: TestMultiNode/serial/CopyFile (240.58s)

TestMultiNode/serial/StopNode (31.83s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 node stop m03: (8.124408s)
multinode_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status: exit status 7 (11.7747609s)
-- stdout --
	multinode-20220629191914-2408
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220629191914-2408-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220629191914-2408-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status --alsologtostderr: exit status 7 (11.9296331s)
-- stdout --
	multinode-20220629191914-2408
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220629191914-2408-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220629191914-2408-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0629 19:31:02.197170    6988 out.go:296] Setting OutFile to fd 624 ...
	I0629 19:31:02.253341    6988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 19:31:02.253341    6988 out.go:309] Setting ErrFile to fd 684...
	I0629 19:31:02.253341    6988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 19:31:02.263552    6988 out.go:303] Setting JSON to false
	I0629 19:31:02.263552    6988 mustload.go:65] Loading cluster: multinode-20220629191914-2408
	I0629 19:31:02.264288    6988 config.go:178] Loaded profile config "multinode-20220629191914-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 19:31:02.264288    6988 status.go:253] checking status of multinode-20220629191914-2408 ...
	I0629 19:31:02.275580    6988 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}
	I0629 19:31:05.374999    6988 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}: (3.0992886s)
	I0629 19:31:05.375187    6988 status.go:328] multinode-20220629191914-2408 host status = "Running" (err=<nil>)
	I0629 19:31:05.375284    6988 host.go:66] Checking if "multinode-20220629191914-2408" exists ...
	I0629 19:31:05.382686    6988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408
	I0629 19:31:06.469253    6988 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408: (1.0865603s)
	I0629 19:31:06.469453    6988 host.go:66] Checking if "multinode-20220629191914-2408" exists ...
	I0629 19:31:06.482596    6988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 19:31:06.489597    6988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:31:07.607208    6988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1176031s)
	I0629 19:31:07.607719    6988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54457 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408\id_rsa Username:docker}
	I0629 19:31:07.793586    6988 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.310981s)
	I0629 19:31:07.804563    6988 ssh_runner.go:195] Run: systemctl --version
	I0629 19:31:07.828879    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 19:31:07.868180    6988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220629191914-2408
	I0629 19:31:09.001984    6988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220629191914-2408: (1.1337963s)
	I0629 19:31:09.003335    6988 kubeconfig.go:92] found "multinode-20220629191914-2408" server: "https://127.0.0.1:54456"
	I0629 19:31:09.003335    6988 api_server.go:165] Checking apiserver status ...
	I0629 19:31:09.013195    6988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 19:31:09.064644    6988 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1914/cgroup
	I0629 19:31:09.091632    6988 api_server.go:181] apiserver freezer: "20:freezer:/docker/b554e1949a1a761a404841f84819c741361a1ad95bc3d11656316abbc644b4e0/kubepods/burstable/pod9c7eac304a910f4e89eb5c9093788bc9/72903587275b3abb44bc547a882d87db982a8e3ec8f9902f702cd43e568bf983"
	I0629 19:31:09.105092    6988 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b554e1949a1a761a404841f84819c741361a1ad95bc3d11656316abbc644b4e0/kubepods/burstable/pod9c7eac304a910f4e89eb5c9093788bc9/72903587275b3abb44bc547a882d87db982a8e3ec8f9902f702cd43e568bf983/freezer.state
	I0629 19:31:09.132617    6988 api_server.go:203] freezer state: "THAWED"
	I0629 19:31:09.132704    6988 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54456/healthz ...
	I0629 19:31:09.151300    6988 api_server.go:266] https://127.0.0.1:54456/healthz returned 200:
	ok
	I0629 19:31:09.151300    6988 status.go:419] multinode-20220629191914-2408 apiserver status = Running (err=<nil>)
	I0629 19:31:09.151300    6988 status.go:255] multinode-20220629191914-2408 status: &{Name:multinode-20220629191914-2408 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0629 19:31:09.151300    6988 status.go:253] checking status of multinode-20220629191914-2408-m02 ...
	I0629 19:31:09.166011    6988 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408-m02 --format={{.State.Status}}
	I0629 19:31:10.282634    6988 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408-m02 --format={{.State.Status}}: (1.1165566s)
	I0629 19:31:10.282667    6988 status.go:328] multinode-20220629191914-2408-m02 host status = "Running" (err=<nil>)
	I0629 19:31:10.282713    6988 host.go:66] Checking if "multinode-20220629191914-2408-m02" exists ...
	I0629 19:31:10.292364    6988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408-m02
	I0629 19:31:11.416065    6988 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629191914-2408-m02: (1.1231663s)
	I0629 19:31:11.416065    6988 host.go:66] Checking if "multinode-20220629191914-2408-m02" exists ...
	I0629 19:31:11.425038    6988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 19:31:11.432041    6988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02
	I0629 19:31:12.543772    6988 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629191914-2408-m02: (1.1115079s)
	I0629 19:31:12.543772    6988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54527 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220629191914-2408-m02\id_rsa Username:docker}
	I0629 19:31:12.690901    6988 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2658543s)
	I0629 19:31:12.702669    6988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 19:31:12.733550    6988 status.go:255] multinode-20220629191914-2408-m02 status: &{Name:multinode-20220629191914-2408-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0629 19:31:12.733550    6988 status.go:253] checking status of multinode-20220629191914-2408-m03 ...
	I0629 19:31:12.748145    6988 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408-m03 --format={{.State.Status}}
	I0629 19:31:13.869110    6988 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408-m03 --format={{.State.Status}}: (1.1209578s)
	I0629 19:31:13.869110    6988 status.go:328] multinode-20220629191914-2408-m03 host status = "Stopped" (err=<nil>)
	I0629 19:31:13.869110    6988 status.go:341] host is not running, skipping remaining checks
	I0629 19:31:13.869110    6988 status.go:255] multinode-20220629191914-2408-m03 status: &{Name:multinode-20220629191914-2408-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (31.83s)

TestMultiNode/serial/StartAfterStop (57.71s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:242: (dbg) Done: docker version -f {{.Server.Version}}: (1.1700764s)
multinode_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 node start m03 --alsologtostderr
E0629 19:31:54.785348    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
multinode_test.go:252: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 node start m03 --alsologtostderr: (41.7560878s)
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status
multinode_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status: (14.539921s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (57.71s)

TestMultiNode/serial/DeleteNode (43.31s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 node delete m03: (31.0178355s)
multinode_test.go:398: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status --alsologtostderr
multinode_test.go:398: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status --alsologtostderr: (10.7536768s)
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:412: (dbg) Done: docker volume ls: (1.0736956s)
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (43.31s)

TestMultiNode/serial/StopMultiNode (42.71s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 stop
multinode_test.go:312: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 stop: (33.60073s)
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status: exit status 7 (4.5560701s)
-- stdout --
	multinode-20220629191914-2408
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220629191914-2408-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status --alsologtostderr: exit status 7 (4.557622s)
-- stdout --
	multinode-20220629191914-2408
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220629191914-2408-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0629 19:38:46.806171    7900 out.go:296] Setting OutFile to fd 996 ...
	I0629 19:38:46.862275    7900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 19:38:46.862275    7900 out.go:309] Setting ErrFile to fd 896...
	I0629 19:38:46.862275    7900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 19:38:46.871556    7900 out.go:303] Setting JSON to false
	I0629 19:38:46.871556    7900 mustload.go:65] Loading cluster: multinode-20220629191914-2408
	I0629 19:38:46.872383    7900 config.go:178] Loaded profile config "multinode-20220629191914-2408": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 19:38:46.872383    7900 status.go:253] checking status of multinode-20220629191914-2408 ...
	I0629 19:38:46.886180    7900 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}
	I0629 19:38:49.982611    7900 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408 --format={{.State.Status}}: (3.0964107s)
	I0629 19:38:49.982611    7900 status.go:328] multinode-20220629191914-2408 host status = "Stopped" (err=<nil>)
	I0629 19:38:49.982611    7900 status.go:341] host is not running, skipping remaining checks
	I0629 19:38:49.982611    7900 status.go:255] multinode-20220629191914-2408 status: &{Name:multinode-20220629191914-2408 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0629 19:38:49.982611    7900 status.go:253] checking status of multinode-20220629191914-2408-m02 ...
	I0629 19:38:49.996583    7900 cli_runner.go:164] Run: docker container inspect multinode-20220629191914-2408-m02 --format={{.State.Status}}
	I0629 19:38:51.096959    7900 cli_runner.go:217] Completed: docker container inspect multinode-20220629191914-2408-m02 --format={{.State.Status}}: (1.1002138s)
	I0629 19:38:51.096986    7900 status.go:328] multinode-20220629191914-2408-m02 host status = "Stopped" (err=<nil>)
	I0629 19:38:51.096986    7900 status.go:341] host is not running, skipping remaining checks
	I0629 19:38:51.096986    7900 status.go:255] multinode-20220629191914-2408-m02 status: &{Name:multinode-20220629191914-2408-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (42.71s)

TestMultiNode/serial/RestartMultiNode (121.89s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:342: (dbg) Done: docker version -f {{.Server.Version}}: (1.1766085s)
multinode_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220629191914-2408 --wait=true -v=8 --alsologtostderr --driver=docker
E0629 19:38:54.263116    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 19:40:17.134805    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
multinode_test.go:352: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220629191914-2408 --wait=true -v=8 --alsologtostderr --driver=docker: (1m49.1889857s)
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220629191914-2408 status --alsologtostderr: (10.9751286s)
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (121.89s)

TestMultiNode/serial/ValidateNameConflict (148.73s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220629191914-2408
multinode_test.go:450: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220629191914-2408-m02 --driver=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220629191914-2408-m02 --driver=docker: exit status 14 (387.833ms)
-- stdout --
	* [multinode-20220629191914-2408-m02] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220629191914-2408-m02' is duplicated with machine name 'multinode-20220629191914-2408-m02' in profile 'multinode-20220629191914-2408'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220629191914-2408-m03 --driver=docker
E0629 19:41:54.785959    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
multinode_test.go:458: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220629191914-2408-m03 --driver=docker: (2m0.2160072s)
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220629191914-2408
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20220629191914-2408: exit status 80 (6.7775933s)
-- stdout --
	* Adding node m03 to cluster multinode-20220629191914-2408
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220629191914-2408-m03 already exists in multinode-20220629191914-2408-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube_node_faf4be2af32ab6d64b40fb15c6239eaae2a98ae3_54.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20220629191914-2408-m03
multinode_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20220629191914-2408-m03: (20.9998972s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (148.73s)

TestPreload (340.84s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220629194419-2408 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
E0629 19:45:17.133030    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 19:46:54.795823    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
preload_test.go:48: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20220629194419-2408 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: (2m56.7522854s)
preload_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20220629194419-2408 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20220629194419-2408 -- docker pull gcr.io/k8s-minikube/busybox: (8.4136331s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220629194419-2408 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3
E0629 19:48:54.271049    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20220629194419-2408 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3: (2m3.6544235s)
preload_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20220629194419-2408 -- docker images
preload_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20220629194419-2408 -- docker images: (7.2634742s)
helpers_test.go:175: Cleaning up "test-preload-20220629194419-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20220629194419-2408
E0629 19:50:00.317320    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20220629194419-2408: (24.751712s)
--- PASS: TestPreload (340.84s)

TestScheduledStopWindows (231.69s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20220629195000-2408 --memory=2048 --driver=docker
E0629 19:50:17.136079    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 19:51:37.985497    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 19:51:54.786091    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-20220629195000-2408 --memory=2048 --driver=docker: (1m57.314578s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20220629195000-2408 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20220629195000-2408 --schedule 5m: (6.6293119s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20220629195000-2408 -n scheduled-stop-20220629195000-2408
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20220629195000-2408 -n scheduled-stop-20220629195000-2408: (7.3395418s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-20220629195000-2408 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-20220629195000-2408 -- sudo systemctl show minikube-scheduled-stop --no-page: (7.070725s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20220629195000-2408 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20220629195000-2408 --schedule 5s: (5.4200868s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-20220629195000-2408
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-20220629195000-2408: exit status 7 (3.4672973s)

-- stdout --
	scheduled-stop-20220629195000-2408
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220629195000-2408 -n scheduled-stop-20220629195000-2408
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220629195000-2408 -n scheduled-stop-20220629195000-2408: exit status 7 (3.4413478s)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220629195000-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20220629195000-2408
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20220629195000-2408: (20.991958s)
--- PASS: TestScheduledStopWindows (231.69s)

TestInsufficientStorage (113.57s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20220629195352-2408 --memory=2048 --output=json --wait=true --driver=docker
E0629 19:53:54.274578    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20220629195352-2408 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (1m19.1282218s)

-- stdout --
	{"specversion":"1.0","id":"e47b79e3-5eb6-47ba-92bc-294e01082c52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220629195352-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2033410e-262a-404e-ae54-01c1f252b3c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube8\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"d8352308-b92c-4dd1-ab3f-2f8d388973a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"8c75016f-0a80-4679-b982-f84a2cb8e7f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14420"}}
	{"specversion":"1.0","id":"5166c4e6-9085-497a-b779-aa2f6b1c2b95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"46960631-a4d3-48c2-b0f8-d2b5e4b9ddb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c5add717-2249-4e24-ae4d-f42f9a413394","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"88556d98-a3ce-4c55-a80b-a9beb7d64e03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3166a142-6347-45a9-9aa0-6ec6846030af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"53950f1d-a1bf-4a74-a2c8-b3df0bfd8d62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220629195352-2408 in cluster insufficient-storage-20220629195352-2408","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"06e769ee-3da5-4871-81c8-fbd4fb4e1374","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b7c32d07-64ae-4230-9019-d40b8ba5ad47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"29d80486-ce8b-488b-850e-1314abce781c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
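
The stdout above is a stream of newline-delimited CloudEvents JSON, which is what `minikube start --output=json` emits. A minimal Python sketch of pulling the fatal event out of such a stream (event payloads abbreviated from the output shown; this is a reader sketch, not minikube's or the test's actual parsing code):

```python
import json

# Two of the CloudEvents lines streamed above (abbreviated; the real
# events carry additional fields such as id, source, datacontenttype).
stream = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.step",'
    '"data":{"currentstep":"8","name":"Creating Container","totalsteps":"19"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}',
]

# Each line is one JSON event; the error event carries the exit code.
events = [json.loads(line) for line in stream]
errors = [e for e in events if e["type"].endswith(".error")]
assert errors[0]["data"]["exitcode"] == "26"
print(errors[0]["data"]["name"])  # → RSRC_DOCKER_STORAGE
```

The `exitcode` field of the `.error` event matches the exit status 26 reported for the Run above.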
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220629195352-2408 --output=json --layout=cluster
E0629 19:55:17.138996    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220629195352-2408 --output=json --layout=cluster: exit status 7 (6.9846401s)

-- stdout --
	{"Name":"insufficient-storage-20220629195352-2408","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220629195352-2408","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0629 19:55:18.455958    6552 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220629195352-2408" does not appear in C:\Users\jenkins.minikube8\minikube-integration\kubeconfig

** /stderr **
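
The `--output=json --layout=cluster` payload above is nested: a top-level status plus per-node and per-component entries. A small Python sketch of reading the degraded state back out (fields trimmed to those shown above):

```python
import json

# The cluster-layout status JSON printed above, reduced to the fields
# this sketch inspects.
status = json.loads(
    '{"Name":"insufficient-storage-20220629195352-2408",'
    '"StatusCode":507,"StatusName":"InsufficientStorage",'
    '"Components":{"kubeconfig":{"StatusCode":500,"StatusName":"Error"}},'
    '"Nodes":[{"Name":"insufficient-storage-20220629195352-2408",'
    '"StatusCode":507,"StatusName":"InsufficientStorage",'
    '"Components":{"apiserver":{"StatusCode":405,"StatusName":"Stopped"},'
    '"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}'
)

# The top-level StatusName carries the overall verdict; the node and
# component entries explain why it is degraded.
assert status["StatusName"] == "InsufficientStorage"
degraded = [n for n in status["Nodes"] if n["StatusCode"] >= 500]
print(len(degraded))  # → 1
```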
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220629195352-2408 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220629195352-2408 --output=json --layout=cluster: exit status 7 (7.0581021s)

-- stdout --
	{"Name":"insufficient-storage-20220629195352-2408","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220629195352-2408","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0629 19:55:25.512215    6716 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220629195352-2408" does not appear in C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	E0629 19:55:25.553117    6716 status.go:557] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\insufficient-storage-20220629195352-2408\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220629195352-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20220629195352-2408
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20220629195352-2408: (20.3981192s)
--- PASS: TestInsufficientStorage (113.57s)

TestRunningBinaryUpgrade (428.2s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.1814952491.exe start -p running-upgrade-20220629200114-2408 --memory=2200 --vm-driver=docker
E0629 20:01:54.794879    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.1814952491.exe start -p running-upgrade-20220629200114-2408 --memory=2200 --vm-driver=docker: (4m24.4738312s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-20220629200114-2408 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-20220629200114-2408 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m20.7114007s)
helpers_test.go:175: Cleaning up "running-upgrade-20220629200114-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20220629200114-2408
E0629 20:08:17.999651    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20220629200114-2408: (22.2202037s)
--- PASS: TestRunningBinaryUpgrade (428.20s)

TestKubernetesUpgrade (474.81s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220629195914-2408 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220629195914-2408 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: (2m49.1946617s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220629195914-2408
version_upgrade_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220629195914-2408: (10.9657216s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-20220629195914-2408 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-20220629195914-2408 status --format={{.Host}}: exit status 7 (3.8905753s)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220629195914-2408 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker
E0629 20:03:54.268071    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
version_upgrade_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220629195914-2408 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker: (2m24.4890455s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220629195914-2408 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220629195914-2408 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220629195914-2408 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker: exit status 106 (3.6435945s)

-- stdout --
	* [kubernetes-upgrade-20220629195914-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.24.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220629195914-2408
	    minikube start -p kubernetes-upgrade-20220629195914-2408 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220629195914-24082 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.24.2, by running:
	    
	    minikube start -p kubernetes-upgrade-20220629195914-2408 --kubernetes-version=v1.24.2
	    

** /stderr **
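
The downgrade guard above refuses to move the existing v1.24.2 cluster back to v1.16.0 and exits with status 106 (`K8S_DOWNGRADE_UNSUPPORTED`). A hypothetical sketch of the version comparison involved, not minikube's actual implementation:

```python
# Parse a "vMAJOR.MINOR.PATCH" string into a comparable tuple.
def parse(v: str) -> tuple:
    return tuple(int(p) for p in v.lstrip("v").split("."))

current, requested = parse("v1.24.2"), parse("v1.16.0")

# Requesting a version lower than the running cluster is a downgrade,
# which is what triggers the exit status 106 path seen above.
assert requested < current
print(parse("v1.24.2"))  # → (1, 24, 2)
```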
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220629195914-2408 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220629195914-2408 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker: (1m54.4784631s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220629195914-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220629195914-2408

=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220629195914-2408: (27.7980178s)
--- PASS: TestKubernetesUpgrade (474.81s)

TestMissingContainerUpgrade (596.77s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.1.1798783014.exe start -p missing-upgrade-20220629195912-2408 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.1.1798783014.exe start -p missing-upgrade-20220629195912-2408 --memory=2200 --driver=docker: (6m10.6878721s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220629195912-2408
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220629195912-2408: (12.5204527s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220629195912-2408
version_upgrade_test.go:330: (dbg) Done: docker rm missing-upgrade-20220629195912-2408: (1.314945s)
version_upgrade_test.go:336: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-20220629195912-2408 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-20220629195912-2408 --memory=2200 --alsologtostderr -v=1 --driver=docker: (3m5.2305554s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220629195912-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20220629195912-2408
E0629 20:08:54.278730    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20220629195912-2408: (26.2260258s)
--- PASS: TestMissingContainerUpgrade (596.77s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220629195545-2408 --no-kubernetes --kubernetes-version=1.20 --driver=docker

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220629195545-2408 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (472.7483ms)

-- stdout --
	* [NoKubernetes-20220629195545-2408] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
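
The exit status 14 (`MK_USAGE`) above comes from flag validation: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. A hypothetical sketch of that check (the function and exit-code constant are illustrative, not minikube's actual code):

```python
MK_USAGE = 14  # exit status observed above for the flag conflict

# Reject the conflicting flag combination before doing any work.
def validate(no_kubernetes: bool, kubernetes_version: str) -> int:
    if no_kubernetes and kubernetes_version:
        return MK_USAGE
    return 0

assert validate(True, "1.20") == MK_USAGE   # the case exercised above
assert validate(True, "") == 0              # --no-kubernetes alone is fine
print(validate(True, "1.20"))  # → 14
```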
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)

TestNoKubernetes/serial/StartWithK8s (174.62s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220629195545-2408 --driver=docker
E0629 19:56:54.800970    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220629195545-2408 --driver=docker: (2m44.3520522s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-20220629195545-2408 status -o json

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-20220629195545-2408 status -o json: (10.2660998s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (174.62s)

TestNoKubernetes/serial/StartWithStopK8s (73.56s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220629195545-2408 --no-kubernetes --driver=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220629195545-2408 --no-kubernetes --driver=docker: (44.1344358s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-20220629195545-2408 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-20220629195545-2408 status -o json: exit status 2 (8.5906616s)

-- stdout --
	{"Name":"NoKubernetes-20220629195545-2408","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
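
The JSON above shows why the status call exits non-zero here: with `--no-kubernetes` the container host keeps running while kubelet and apiserver stay stopped. A minimal Python sketch reading the same line:

```python
import json

# The status line printed above for the --no-kubernetes profile.
status = json.loads(
    '{"Name":"NoKubernetes-20220629195545-2408","Host":"Running",'
    '"Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured",'
    '"Worker":false}'
)

# Host is up but no Kubernetes components run, so the overall status
# is degraded even though the profile is behaving as requested.
assert status["Host"] == "Running" and status["Kubelet"] == "Stopped"
print(status["Worker"])  # → False
```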
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-20220629195545-2408
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-20220629195545-2408: (20.8376526s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (73.56s)

TestStoppedBinaryUpgrade/Setup (0.91s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.91s)

TestStoppedBinaryUpgrade/Upgrade (430.4s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.3942688751.exe start -p stopped-upgrade-20220629195923-2408 --memory=2200 --vm-driver=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.3942688751.exe start -p stopped-upgrade-20220629195923-2408 --memory=2200 --vm-driver=docker: (5m24.9341668s)
version_upgrade_test.go:199: (dbg) Run:  C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.3942688751.exe -p stopped-upgrade-20220629195923-2408 stop
version_upgrade_test.go:199: (dbg) Done: C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.3942688751.exe -p stopped-upgrade-20220629195923-2408 stop: (24.6337714s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-20220629195923-2408 --memory=2200 --alsologtostderr -v=1 --driver=docker
E0629 20:05:17.139623    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-20220629195923-2408 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m20.8295495s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (430.40s)

TestStoppedBinaryUpgrade/MinikubeLogs (12.19s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220629195923-2408
E0629 20:06:40.337134    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.

=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220629195923-2408: (12.1908528s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (12.19s)

TestPause/serial/Start (168.23s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220629200709-2408 --memory=2048 --install-addons=false --wait=all --driver=docker

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20220629200709-2408 --memory=2048 --install-addons=false --wait=all --driver=docker: (2m48.232105s)
--- PASS: TestPause/serial/Start (168.23s)

TestPause/serial/SecondStartNoReconfiguration (71.61s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220629200709-2408 --alsologtostderr -v=1 --driver=docker
E0629 20:10:17.142924    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20220629200709-2408 --alsologtostderr -v=1 --driver=docker: (1m11.5903832s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (71.61s)

TestPause/serial/Pause (7.33s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20220629200709-2408 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20220629200709-2408 --alsologtostderr -v=5: (7.3345657s)
--- PASS: TestPause/serial/Pause (7.33s)

TestPause/serial/VerifyStatus (7.7s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-20220629200709-2408 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-20220629200709-2408 --output=json --layout=cluster: exit status 2 (7.6981009s)

-- stdout --
	{"Name":"pause-20220629200709-2408","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220629200709-2408","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (7.70s)
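The `--output=json --layout=cluster` payload above reports each component with an HTTP-style status code: 200 for OK, 418 for Paused, 405 for Stopped. A minimal sketch of consuming that payload with the standard `json` module, using an excerpt of the exact fields printed above (only the keys inspected here are reproduced; the parsing code itself is illustrative and not part of the test suite):

```python
import json

# Excerpt of the status payload from the VerifyStatus run above;
# only the fields inspected below are reproduced.
raw = """
{"Name": "pause-20220629200709-2408", "StatusCode": 418, "StatusName": "Paused",
 "Nodes": [{"Name": "pause-20220629200709-2408", "StatusCode": 200, "StatusName": "OK",
            "Components": {"apiserver": {"Name": "apiserver", "StatusCode": 418, "StatusName": "Paused"},
                           "kubelet": {"Name": "kubelet", "StatusCode": 405, "StatusName": "Stopped"}}}]}
"""

status = json.loads(raw)
components = status["Nodes"][0]["Components"]
print(status["StatusName"])                 # cluster-level state: Paused
print(components["kubelet"]["StatusName"])  # kubelet reported as Stopped
```

A paused cluster is a non-running state, which is why the status command exits with status 2 here; the test passes despite the non-zero exit.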
TestPause/serial/Unpause (7.07s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-20220629200709-2408 --alsologtostderr -v=5

=== CONT  TestPause/serial/Unpause
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-20220629200709-2408 --alsologtostderr -v=5: (7.0658782s)
--- PASS: TestPause/serial/Unpause (7.07s)

TestStartStop/group/old-k8s-version/serial/FirstStart (212.28s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220629201126-2408 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20220629201126-2408 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (3m32.2773897s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (212.28s)

TestPause/serial/PauseAgain (7.72s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20220629200709-2408 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20220629200709-2408 --alsologtostderr -v=5: (7.7187277s)
--- PASS: TestPause/serial/PauseAgain (7.72s)

TestPause/serial/DeletePaused (25.27s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-20220629200709-2408 --alsologtostderr -v=5

=== CONT  TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-20220629200709-2408 --alsologtostderr -v=5: (25.2721204s)
--- PASS: TestPause/serial/DeletePaused (25.27s)

TestPause/serial/VerifyDeletedResources (29.93s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (26.4615939s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:168: (dbg) Done: docker ps -a: (1.104499s)
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220629200709-2408
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-20220629200709-2408: exit status 1 (1.1346197s)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220629200709-2408

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
pause_test.go:178: (dbg) Done: docker network ls: (1.1887065s)
--- PASS: TestPause/serial/VerifyDeletedResources (29.93s)
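The deletion check above relies on `docker volume inspect` failing for a removed volume: exit status 1, `[]` on stdout, and an `Error: No such volume` message on stderr. A small, hypothetical helper (not part of the test code) that interprets such a result the same way, fed the values captured above:

```python
import json

def volume_deleted(exit_code: int, stdout: str, stderr: str) -> bool:
    # A removed volume makes `docker volume inspect` exit 1 with an empty
    # JSON array on stdout and a "No such volume" error on stderr, as seen
    # in the log above. Any other combination is treated as "still present".
    return exit_code == 1 and json.loads(stdout) == [] and "No such volume" in stderr

# Values captured from the run above:
print(volume_deleted(1, "[]", "Error: No such volume: pause-20220629200709-2408"))  # True
```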
TestStartStop/group/no-preload/serial/FirstStart (199.58s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220629201225-2408 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.2

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20220629201225-2408 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.2: (3m19.5827794s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (199.58s)

TestStartStop/group/embed-certs/serial/FirstStart (189.76s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220629201242-2408 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.24.2

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20220629201242-2408 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.24.2: (3m9.7564092s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (189.76s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (161.08s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220629201430-2408 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.24.2

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220629201430-2408 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.24.2: (2m41.0816642s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (161.08s)

TestStartStop/group/old-k8s-version/serial/DeployApp (16.67s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220629201126-2408 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [0b7ba0f3-9d77-40ef-9368-2130fd3ec75f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [0b7ba0f3-9d77-40ef-9368-2130fd3ec75f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.145898s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220629201126-2408 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:196: (dbg) Done: kubectl --context old-k8s-version-20220629201126-2408 exec busybox -- /bin/sh -c "ulimit -n": (3.813587s)
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (16.67s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (7.51s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220629201126-2408 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0629 20:15:17.142854    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220629201126-2408 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (6.960334s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-20220629201126-2408 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (7.51s)

TestStartStop/group/old-k8s-version/serial/Stop (20.43s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-20220629201126-2408 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-20220629201126-2408 --alsologtostderr -v=3: (20.4308402s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.43s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (8.49s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220629201126-2408 -n old-k8s-version-20220629201126-2408

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220629201126-2408 -n old-k8s-version-20220629201126-2408: exit status 7 (4.1100307s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220629201126-2408 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220629201126-2408 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (4.3764401s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (8.49s)

TestStartStop/group/no-preload/serial/DeployApp (14.99s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220629201225-2408 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) Done: kubectl --context no-preload-20220629201225-2408 create -f testdata\busybox.yaml: (1.1271944s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [87667dbf-3b7d-4962-9488-ca1a41293422] Pending

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:342: "busybox" [87667dbf-3b7d-4962-9488-ca1a41293422] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:342: "busybox" [87667dbf-3b7d-4962-9488-ca1a41293422] Running

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 13.0595903s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220629201225-2408 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (14.99s)

TestStartStop/group/old-k8s-version/serial/SecondStart (480.34s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220629201126-2408 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20220629201126-2408 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (7m51.4702656s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220629201126-2408 -n old-k8s-version-20220629201126-2408

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220629201126-2408 -n old-k8s-version-20220629201126-2408: (8.8702625s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (480.34s)

TestStartStop/group/embed-certs/serial/DeployApp (12.68s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220629201242-2408 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [575aa805-cd1a-409e-941d-0d18fb92a621] Pending
helpers_test.go:342: "busybox" [575aa805-cd1a-409e-941d-0d18fb92a621] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:342: "busybox" [575aa805-cd1a-409e-941d-0d18fb92a621] Running

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.0932246s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220629201242-2408 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.68s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (7.9s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220629201225-2408 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220629201225-2408 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (7.4324477s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-20220629201225-2408 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (7.90s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (8.73s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220629201242-2408 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220629201242-2408 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (8.2213222s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-20220629201242-2408 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (8.73s)

TestStartStop/group/no-preload/serial/Stop (22.04s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-20220629201225-2408 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-20220629201225-2408 --alsologtostderr -v=3: (22.0401209s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (22.04s)

TestStartStop/group/embed-certs/serial/Stop (21.35s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-20220629201242-2408 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-20220629201242-2408 --alsologtostderr -v=3: (21.3545613s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (21.35s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (8.06s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220629201225-2408 -n no-preload-20220629201225-2408
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220629201225-2408 -n no-preload-20220629201225-2408: exit status 7 (4.0494703s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220629201225-2408 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220629201225-2408 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (4.0129563s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (8.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (7.84s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220629201242-2408 -n embed-certs-20220629201242-2408

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220629201242-2408 -n embed-certs-20220629201242-2408: exit status 7 (3.8846694s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220629201242-2408 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220629201242-2408 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.9506261s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (7.84s)
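Both EnableAddonAfterStop runs show the same pattern: `minikube status --format={{.Host}}` on a stopped cluster exits with status 7 and prints `Stopped`, which the test log explicitly notes "may be ok" before it proceeds to enable the addon. A hypothetical guard sketching that interpretation (the exit code and output are taken from the runs above; other states are deliberately not modeled):

```python
def host_cleanly_stopped(exit_code: int, stdout: str) -> bool:
    # Exit status 7 with "Stopped" on stdout is how the runs above report a
    # stopped host; the test then continues with `addons enable`.
    return exit_code == 7 and stdout.strip() == "Stopped"

print(host_cleanly_stopped(7, "Stopped\n"))  # True  (matches the runs above)
print(host_cleanly_stopped(0, "Running\n"))  # False (cluster still up)
```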
TestStartStop/group/no-preload/serial/SecondStart (408.54s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220629201225-2408 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.2

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20220629201225-2408 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.2: (6m38.0796667s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220629201225-2408 -n no-preload-20220629201225-2408
E0629 20:23:20.346991    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220629201225-2408 -n no-preload-20220629201225-2408: (10.4589857s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (408.54s)

TestStartStop/group/embed-certs/serial/SecondStart (392.66s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220629201242-2408 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.24.2
E0629 20:16:54.807972    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 20:16:57.488788    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20220629201242-2408 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.24.2: (6m21.0691498s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220629201242-2408 -n embed-certs-20220629201242-2408
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220629201242-2408 -n embed-certs-20220629201242-2408: (11.5933839s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (392.66s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (12.64s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220629201430-2408 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [c0df8943-e7b9-400f-9c9e-d569c6812393] Pending
helpers_test.go:342: "busybox" [c0df8943-e7b9-400f-9c9e-d569c6812393] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [c0df8943-e7b9-400f-9c9e-d569c6812393] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 11.1928833s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220629201430-2408 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (12.64s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (7.40s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220629201430-2408 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220629201430-2408 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (6.9589438s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-different-port-20220629201430-2408 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (7.40s)

TestStartStop/group/default-k8s-different-port/serial/Stop (21.44s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220629201430-2408 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220629201430-2408 --alsologtostderr -v=3: (21.4352678s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (21.44s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (7.65s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220629201430-2408 -n default-k8s-different-port-20220629201430-2408
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220629201430-2408 -n default-k8s-different-port-20220629201430-2408: exit status 7 (3.9309081s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220629201430-2408 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220629201430-2408 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.7223933s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (7.65s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (392.32s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220629201430-2408 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.24.2
E0629 20:18:54.278252    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 20:20:17.150581    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 20:21:54.805601    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220629201430-2408 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.24.2: (6m22.1909035s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220629201430-2408 -n default-k8s-different-port-20220629201430-2408

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220629201430-2408 -n default-k8s-different-port-20220629201430-2408: (10.1246281s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (392.32s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.10s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-sbhzs" [5a6142c8-123f-4de5-9738-6c918d8bcd14] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-sbhzs" [5a6142c8-123f-4de5-9738-6c918d8bcd14] Running

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.0929111s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.10s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (43.11s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-zkz72" [9fb16796-a541-44e3-936f-74c932b7d47c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-zkz72" [9fb16796-a541-44e3-936f-74c932b7d47c] Running
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 43.1032092s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (43.11s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.02s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-sbhzs" [5a6142c8-123f-4de5-9738-6c918d8bcd14] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0849997s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-20220629201242-2408 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.02s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (9.51s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-20220629201242-2408 "sudo crictl images -o json"

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-20220629201242-2408 "sudo crictl images -o json": (9.5101838s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (9.51s)

TestStartStop/group/embed-certs/serial/Pause (52.43s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-20220629201242-2408 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-20220629201242-2408 --alsologtostderr -v=1: (8.1371778s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220629201242-2408 -n embed-certs-20220629201242-2408

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220629201242-2408 -n embed-certs-20220629201242-2408: exit status 2 (8.0562131s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220629201242-2408 -n embed-certs-20220629201242-2408

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220629201242-2408 -n embed-certs-20220629201242-2408: exit status 2 (9.0795514s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-20220629201242-2408 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-20220629201242-2408 --alsologtostderr -v=1: (9.1932315s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220629201242-2408 -n embed-certs-20220629201242-2408

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220629201242-2408 -n embed-certs-20220629201242-2408: (9.5696682s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220629201242-2408 -n embed-certs-20220629201242-2408

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220629201242-2408 -n embed-certs-20220629201242-2408: (8.3906536s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (52.43s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.05s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-958c5c65f-4tbsp" [19b467bb-5f68-4e4f-820e-c9556cf9579d] Running
E0629 20:23:54.287600    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0495707s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.05s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.56s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-958c5c65f-4tbsp" [19b467bb-5f68-4e4f-820e-c9556cf9579d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0262281s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-20220629201126-2408 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.56s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (9.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220629201126-2408 "sudo crictl images -o json"

=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220629201126-2408 "sudo crictl images -o json": (9.1223751s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (9.12s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.55s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-zkz72" [9fb16796-a541-44e3-936f-74c932b7d47c] Running

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0348613s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-20220629201225-2408 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.55s)

TestStartStop/group/old-k8s-version/serial/Pause (58.94s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-20220629201126-2408 --alsologtostderr -v=1

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220629201126-2408 --alsologtostderr -v=1: (8.1902936s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220629201126-2408 -n old-k8s-version-20220629201126-2408

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220629201126-2408 -n old-k8s-version-20220629201126-2408: exit status 2 (9.951403s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20220629201126-2408 -n old-k8s-version-20220629201126-2408

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20220629201126-2408 -n old-k8s-version-20220629201126-2408: exit status 2 (8.5504583s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-20220629201126-2408 --alsologtostderr -v=1

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-20220629201126-2408 --alsologtostderr -v=1: (8.8407078s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220629201126-2408 -n old-k8s-version-20220629201126-2408

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220629201126-2408 -n old-k8s-version-20220629201126-2408: (8.9910809s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20220629201126-2408 -n old-k8s-version-20220629201126-2408
E0629 20:24:58.014487    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20220629201126-2408 -n old-k8s-version-20220629201126-2408: (14.413135s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (58.94s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (9.33s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-20220629201225-2408 "sudo crictl images -o json"

=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-20220629201225-2408 "sudo crictl images -o json": (9.3293398s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (9.33s)

TestStartStop/group/no-preload/serial/Pause (63.34s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-20220629201225-2408 --alsologtostderr -v=1

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-20220629201225-2408 --alsologtostderr -v=1: (8.7970561s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220629201225-2408 -n no-preload-20220629201225-2408

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220629201225-2408 -n no-preload-20220629201225-2408: exit status 2 (8.4447664s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220629201225-2408 -n no-preload-20220629201225-2408

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220629201225-2408 -n no-preload-20220629201225-2408: exit status 2 (9.4640747s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-20220629201225-2408 --alsologtostderr -v=1

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-20220629201225-2408 --alsologtostderr -v=1: (20.2105129s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220629201225-2408 -n no-preload-20220629201225-2408

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220629201225-2408 -n no-preload-20220629201225-2408: (8.3745711s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220629201225-2408 -n no-preload-20220629201225-2408

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220629201225-2408 -n no-preload-20220629201225-2408: (8.0527543s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (63.34s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (45.18s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-jlpln" [53a0e3b0-811f-4a0b-b16e-f091b49c5166] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-jlpln" [53a0e3b0-811f-4a0b-b16e-f091b49c5166] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 45.180605s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (45.18s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.54s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-jlpln" [53a0e3b0-811f-4a0b-b16e-f091b49c5166] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0651001s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-different-port-20220629201430-2408 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.54s)

TestStartStop/group/newest-cni/serial/FirstStart (172.42s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220629202523-2408 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.24.2
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20220629202523-2408 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.24.2: (2m52.419759s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (172.42s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (7.66s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220629201430-2408 "sudo crictl images -o json"
=== CONT  TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220629201430-2408 "sudo crictl images -o json": (7.6577929s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (7.66s)

TestNetworkPlugins/group/auto/Start (182.55s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20220629200908-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-20220629200908-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: (3m2.5523294s)
--- PASS: TestNetworkPlugins/group/auto/Start (182.55s)

TestNetworkPlugins/group/kindnet/Start (192.44s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-20220629200924-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-20220629200924-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: (3m12.4403182s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (192.44s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (7.75s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220629202523-2408 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220629202523-2408 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (7.7503332s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (7.75s)

TestStartStop/group/newest-cni/serial/Stop (20.76s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-20220629202523-2408 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-20220629202523-2408 --alsologtostderr -v=3: (20.7571863s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.76s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (8.49s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220629202523-2408 -n newest-cni-20220629202523-2408
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220629202523-2408 -n newest-cni-20220629202523-2408: exit status 7 (4.1391887s)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220629202523-2408 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220629202523-2408 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (4.350727s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (8.49s)

TestStartStop/group/newest-cni/serial/SecondStart (96.95s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220629202523-2408 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.24.2
E0629 20:28:54.281769    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20220629202523-2408 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.24.2: (1m27.1086242s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220629202523-2408 -n newest-cni-20220629202523-2408
E0629 20:30:19.973490    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220629202523-2408 -n newest-cni-20220629202523-2408: (9.8390183s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (96.95s)

TestNetworkPlugins/group/auto/KubeletFlags (7.47s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-20220629200908-2408 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-20220629200908-2408 "pgrep -a kubelet": (7.4723728s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (7.47s)

TestNetworkPlugins/group/auto/NetCatPod (22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220629200908-2408 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-n22xx" [19b59849-bd2a-4fca-8aed-4cae1867e3c8] Pending
helpers_test.go:342: "netcat-869c55b6dc-n22xx" [19b59849-bd2a-4fca-8aed-4cae1867e3c8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-n22xx" [19b59849-bd2a-4fca-8aed-4cae1867e3c8] Running
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 21.0373667s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (22.00s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-mcdmn" [761d3d72-7471-4c2d-9e50-0ca5dc228fd6] Running
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.0442389s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

TestNetworkPlugins/group/auto/DNS (0.63s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.63s)

TestNetworkPlugins/group/kindnet/KubeletFlags (7.9s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-20220629200924-2408 "pgrep -a kubelet"
=== CONT  TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-20220629200924-2408 "pgrep -a kubelet": (7.8983079s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (7.90s)

TestNetworkPlugins/group/auto/Localhost (0.58s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220629200908-2408 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.58s)

TestNetworkPlugins/group/auto/HairPin (5.51s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220629200908-2408 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-20220629200908-2408 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5005582s)
** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.51s)

TestNetworkPlugins/group/kindnet/NetCatPod (26.16s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220629200924-2408 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-nb6gl" [00948f81-80e4-404c-8f5a-f921cb6ea022] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-nb6gl" [00948f81-80e4-404c-8f5a-f921cb6ea022] Running
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 25.1158815s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (26.16s)

TestNetworkPlugins/group/kindnet/DNS (0.85s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220629200924-2408 exec deployment/netcat -- nslookup kubernetes.default
E0629 20:29:59.407434    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
E0629 20:29:59.423396    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
E0629 20:29:59.439261    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
E0629 20:29:59.469952    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
E0629 20:29:59.515617    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
E0629 20:29:59.597133    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
E0629 20:29:59.764295    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.85s)

TestNetworkPlugins/group/kindnet/Localhost (0.81s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220629200924-2408 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0629 20:30:00.092844    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.81s)

TestNetworkPlugins/group/kindnet/HairPin (1.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220629200924-2408 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0629 20:30:00.743160    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
net_test.go:238: (dbg) Done: kubectl --context kindnet-20220629200924-2408 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": (1.1005105s)
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (1.11s)

TestNetworkPlugins/group/false/Start (438.01s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20220629200924-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p false-20220629200924-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: (7m18.00473s)
--- PASS: TestNetworkPlugins/group/false/Start (438.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (9.47s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-20220629202523-2408 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-20220629202523-2408 "sudo crictl images -o json": (9.4671254s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (9.47s)

TestNetworkPlugins/group/bridge/Start (419.32s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20220629200908-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker
E0629 20:32:53.885369    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-different-port-20220629201430-2408\client.crt: The system cannot find the path specified.
E0629 20:33:30.597423    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
E0629 20:33:34.851092    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-different-port-20220629201430-2408\client.crt: The system cannot find the path specified.
E0629 20:33:37.498894    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 20:33:54.279958    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
E0629 20:34:03.078865    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:34:03.094187    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:34:03.110160    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:34:03.141888    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:34:03.187519    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:34:03.268945    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:34:03.431729    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:34:03.760885    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:34:04.415883    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:34:05.698238    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:34:08.285337    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:34:13.415674    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:34:19.860186    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:34:19.887870    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:34:19.908259    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:34:19.940650    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:34:19.991051    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:34:20.075237    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:34:20.258165    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:34:20.584835    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:34:21.240308    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:34:22.526470    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:34:23.662448    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:34:25.087359    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:34:30.211894    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:34:40.462197    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:34:44.155790    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:34:56.779975    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-different-port-20220629201430-2408\client.crt: The system cannot find the path specified.
E0629 20:34:59.409042    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
E0629 20:35:00.956063    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:35:17.155884    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 20:35:25.125716    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:35:27.192431    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
E0629 20:35:41.931131    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:35:46.637731    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
E0629 20:36:14.448942    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220629201225-2408\client.crt: The system cannot find the path specified.
E0629 20:36:47.055550    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
E0629 20:36:54.806839    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
E0629 20:37:03.859988    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220629200924-2408\client.crt: The system cannot find the path specified.
E0629 20:37:12.788316    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-different-port-20220629201430-2408\client.crt: The system cannot find the path specified.
E0629 20:37:40.631322    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-different-port-20220629201430-2408\client.crt: The system cannot find the path specified.
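The repeated `cert_rotation.go:168` lines above occur when a kubeconfig still points at client certificates for minikube profiles that have since been deleted. A minimal sketch of the check behind the error, assuming a Unix-style `$HOME/.minikube` layout rather than the Windows path in the log (the profile name is copied from the log; the directory layout is an assumption based on minikube's default structure):

```shell
# Profile name taken from the log lines above; path layout is an assumption.
profile="kindnet-20220629200924-2408"
crt="$HOME/.minikube/profiles/$profile/client.crt"

if [ -f "$crt" ]; then
  echo "client.crt present"
else
  # This branch corresponds to "The system cannot find the path specified."
  echo "client.crt missing for $profile"
fi
```

Removing the stale profile (`minikube delete -p <profile>`) or the dead kubeconfig context usually silences these messages.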
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-20220629200908-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: (6m59.3214565s)
--- PASS: TestNetworkPlugins/group/bridge/Start (419.32s)

TestNetworkPlugins/group/false/KubeletFlags (8.01s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-20220629200924-2408 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-20220629200924-2408 "pgrep -a kubelet": (8.0132759s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (8.01s)

TestNetworkPlugins/group/false/NetCatPod (20.94s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220629200924-2408 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-srt97" [f5f25df1-51e0-44eb-8e72-de65cc559008] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-srt97" [f5f25df1-51e0-44eb-8e72-de65cc559008] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 20.0307378s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (20.94s)

TestNetworkPlugins/group/enable-default-cni/Start (149.57s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20220629200908-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker
E0629 20:39:03.090504    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220629200908-2408\client.crt: The system cannot find the path specified.
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-20220629200908-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: (2m29.5690618s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (149.57s)

TestNetworkPlugins/group/bridge/KubeletFlags (8.03s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-20220629200908-2408 "pgrep -a kubelet"
=== CONT  TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-20220629200908-2408 "pgrep -a kubelet": (8.0243233s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (8.03s)

TestNetworkPlugins/group/bridge/NetCatPod (21.02s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220629200908-2408 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-jjmm5" [f438dcda-79f8-4fbd-9371-64fe8ce94b8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0629 20:39:59.406829    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220629201126-2408\client.crt: The system cannot find the path specified.
E0629 20:40:00.365924    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-jjmm5" [f438dcda-79f8-4fbd-9371-64fe8ce94b8f] Running
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 20.0895972s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (21.02s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (7.68s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-20220629200908-2408 "pgrep -a kubelet"
=== CONT  TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-20220629200908-2408 "pgrep -a kubelet": (7.6763869s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (7.68s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (29.76s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220629200908-2408 replace --force -f testdata\netcat-deployment.yaml
E0629 20:41:38.031178    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220629185334-2408\client.crt: The system cannot find the path specified.
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-j8jfm" [c51111fe-2945-44f7-8985-3449820256fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-j8jfm" [c51111fe-2945-44f7-8985-3449820256fb] Running
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 20.1091603s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (29.76s)

TestNetworkPlugins/group/kubenet/Start (406.79s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20220629200908-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-20220629200908-2408 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: (6m46.7914921s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (406.79s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.63s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220629200908-2408 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.63s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.58s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220629200908-2408 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.58s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.48s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220629200908-2408 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.48s)
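The Localhost and HairPin checks above both come down to a zero-I/O TCP probe with netcat run inside the `netcat` pod. A sketch of the probe, with flag meanings as found in common netcat variants (exact semantics differ between implementations, so treat this as an approximation):

```shell
# -z    scan only: report whether a listener accepts the connection, send no data
# -w 5  give up after a 5-second timeout
# -i 5  wait up to 5 seconds between probes (irrelevant when probing one port)
probe() {  # usage: probe HOST PORT -> prints "open" or "closed"
  if nc -w 5 -z "$1" "$2" 2>/dev/null; then echo open; else echo closed; fi
}

probe localhost 8080  # Localhost: the pod dials its own container port
probe netcat 8080     # HairPin: the pod dials back in through its own Service
```

The hairpin case is the interesting one: traffic leaves the pod for the Service VIP and must be NATed back to the very same pod, which some CNI/bridge configurations get wrong.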

TestNetworkPlugins/group/kubenet/KubeletFlags (7.26s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-20220629200908-2408 "pgrep -a kubelet"
E0629 20:48:36.007627    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-different-port-20220629201430-2408\client.crt: The system cannot find the path specified.
E0629 20:48:36.244013    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220629200924-2408\client.crt: The system cannot find the path specified.
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-20220629200908-2408 "pgrep -a kubelet": (7.2625264s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (7.26s)

TestNetworkPlugins/group/kubenet/NetCatPod (20.15s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220629200908-2408 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-prbh4" [880abce6-fd4b-43a9-b574-7058bf8fd14e] Pending
helpers_test.go:342: "netcat-869c55b6dc-prbh4" [880abce6-fd4b-43a9-b574-7058bf8fd14e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-prbh4" [880abce6-fd4b-43a9-b574-7058bf8fd14e] Running
E0629 20:48:54.298429    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220629181245-2408\client.crt: The system cannot find the path specified.
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 19.0286785s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (20.15s)

Test skip (25/270)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.24.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.24.2/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.24.2/cached-images (0.00s)

TestDownloadOnly/v1.24.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.24.2/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.24.2/binaries (0.00s)

TestAddons/parallel/Registry (37.83s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: registry stabilized in 22.0152ms
=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-wkv6w" [1b86ca04-3794-4bcd-971a-f24e18de7f32] Running
addons_test.go:284: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0289984s
addons_test.go:287: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-proxy-jk6sx" [52454c38-655e-476d-b162-e744195a1a9c] Running
=== CONT  TestAddons/parallel/Registry
addons_test.go:287: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0493256s
addons_test.go:292: (dbg) Run:  kubectl --context addons-20220629175812-2408 delete po -l run=registry-test --now
addons_test.go:297: (dbg) Run:  kubectl --context addons-20220629175812-2408 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) Done: kubectl --context addons-20220629175812-2408 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (27.2958312s)
addons_test.go:307: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (37.83s)
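The registry check above boils down to an HTTP reachability probe from inside the cluster: `wget --spider` issues the request but downloads nothing, and `-S` prints the response headers. A rough sketch of the same probe (the in-cluster DNS name below only resolves from inside the cluster; outside it the check reports unreachable):

```shell
check() {  # usage: check URL -> prints "reachable" or "unreachable"
  # --spider: just verify the resource exists, do not download it
  # -q: suppress wget's own progress output
  if wget --spider -q "$1" 2>/dev/null; then echo reachable; else echo unreachable; fi
}

check http://registry.kube-system.svc.cluster.local
```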

TestAddons/parallel/Ingress (43.41s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Run:  kubectl --context addons-20220629175812-2408 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:184: (dbg) Run:  kubectl --context addons-20220629175812-2408 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:184: (dbg) Done: kubectl --context addons-20220629175812-2408 replace --force -f testdata\nginx-ingress-v1.yaml: (5.3983305s)
addons_test.go:197: (dbg) Run:  kubectl --context addons-20220629175812-2408 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:197: (dbg) Done: kubectl --context addons-20220629175812-2408 replace --force -f testdata\nginx-pod-svc.yaml: (1.693919s)
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [a7653fef-12d2-409f-9897-2e514dbd42ee] Pending
helpers_test.go:342: "nginx" [a7653fef-12d2-409f-9897-2e514dbd42ee] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [a7653fef-12d2-409f-9897-2e514dbd42ee] Running
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 28.182582s
addons_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220629175812-2408 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:214: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220629175812-2408 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (7.7168003s)
addons_test.go:234: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (43.41s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:450: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.04s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220629181245-2408 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:908: output didn't produce a URL
functional_test.go:902: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220629181245-2408 --alsologtostderr -v=1] ...
helpers_test.go:500: unable to terminate pid 2224: Access is denied.
E0629 18:26:40.249815    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:30:17.104416    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:35:17.100998    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:40:17.114798    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:43:20.269526    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:45:17.117558    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
E0629 18:50:17.118727    2408 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220629175812-2408\client.crt: The system cannot find the path specified.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.04s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (35.36s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220629181245-2408 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220629181245-2408 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-578cdc45cb-m2pgx" [0e278814-a648-4c5f-9a49-a2ea82ead979] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-578cdc45cb-m2pgx" [0e278814-a648-4c5f-9a49-a2ea82ead979] Running
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 34.2621793s
functional_test.go:1575: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (35.36s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (45.48s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:164: (dbg) Run:  kubectl --context ingress-addon-legacy-20220629185334-2408 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:164: (dbg) Done: kubectl --context ingress-addon-legacy-20220629185334-2408 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.2609353s)
addons_test.go:184: (dbg) Run:  kubectl --context ingress-addon-legacy-20220629185334-2408 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:184: (dbg) Done: kubectl --context ingress-addon-legacy-20220629185334-2408 replace --force -f testdata\nginx-ingress-v1beta1.yaml: (1.1341329s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-20220629185334-2408 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:197: (dbg) Done: kubectl --context ingress-addon-legacy-20220629185334-2408 replace --force -f testdata\nginx-pod-svc.yaml: (1.1821832s)
addons_test.go:202: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [4f9701a7-8b5a-4acd-b3d8-c48d50042eeb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [4f9701a7-8b5a-4acd-b3d8-c48d50042eeb] Running
addons_test.go:202: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 24.187135s
addons_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220629185334-2408 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:214: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220629185334-2408 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (6.5192873s)
addons_test.go:234: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (45.48s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (8.72s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220629201422-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220629201422-2408
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220629201422-2408: (8.7230692s)
--- SKIP: TestStartStop/group/disable-driver-mounts (8.72s)

TestNetworkPlugins/group/flannel (15.77s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220629200908-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-20220629200908-2408
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-20220629200908-2408: (15.7732592s)
--- SKIP: TestNetworkPlugins/group/flannel (15.77s)

TestNetworkPlugins/group/custom-flannel (9.05s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220629200924-2408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-flannel-20220629200924-2408
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-flannel-20220629200924-2408: (9.0514405s)
--- SKIP: TestNetworkPlugins/group/custom-flannel (9.05s)
