Test Report: Docker_Windows 14956

b64f5160c8f6e7e7cba4bdc5b90d9175513ec57f:2022-10-25:26256

Failed tests (11/265)

TestFunctional/parallel/ServiceCmd (2125.15s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-000838 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-000838 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-dbp6x" [e0ee0ba7-cb31-4d8a-8c8b-14f9922b5fe2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-dbp6x" [e0ee0ba7-cb31-4d8a-8c8b-14f9922b5fe2] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 44.1666819s
functional_test.go:1449: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1449: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 service list: (1.6374202s)
functional_test.go:1463: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1392: Failed to sent interrupt to proc not supported by windows

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-000838 service --namespace=default --https --url hello-node: exit status 1 (34m30.0768987s)

-- stdout --
	https://127.0.0.1:62685

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1465: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-000838 service --namespace=default --https --url hello-node" : exit status 1
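The failure is consistent with the stderr warning above: on Windows with the Docker driver, the printed URL is served through a tunnel that only works while the command (and its terminal) stays open, and the harness cannot interrupt that process on Windows (see the "Failed to sent interrupt to proc" line). As a rough manual reproduction of the flow under test, reusing only the commands from the Run lines above (a sketch, not part of the test output):

# create and expose the deployment the test waits on
kubectl --context functional-000838 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
kubectl --context functional-000838 expose deployment hello-node --type=NodePort --port=8080
# with the Docker driver on Windows this opens a tunnel; keep the terminal open while using the URL
out/minikube-windows-amd64.exe -p functional-000838 service --namespace=default --https --url hello-node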
functional_test.go:1402: service test failed - dumping debug information
functional_test.go:1403: -----------------------service failure post-mortem--------------------------------
functional_test.go:1406: (dbg) Run:  kubectl --context functional-000838 describe po hello-node
functional_test.go:1410: hello-node pod describe:
Name:         hello-node-5fcdfb5cc4-dbp6x
Namespace:    default
Priority:     0
Node:         functional-000838/192.168.49.2
Start Time:   Tue, 25 Oct 2022 00:13:22 +0000
Labels:       app=hello-node
              pod-template-hash=5fcdfb5cc4
Annotations:  <none>
Status:       Running
IP:           172.17.0.6
IPs:
  IP:           172.17.0.6
Controlled By:  ReplicaSet/hello-node-5fcdfb5cc4
Containers:
  echoserver:
    Container ID:   docker://214cc39686ef2ed7eacf7b7e518d301be043c3993ae75d19b63f4f1352ff537d
    Image:          k8s.gcr.io/echoserver:1.8
    Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 25 Oct 2022 00:13:58 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7fp9b (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-7fp9b:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
                             node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From                        Message
  ----    ------     ----       ----                        -------
  Normal  Scheduled  <unknown>                              Successfully assigned default/hello-node-5fcdfb5cc4-dbp6x to functional-000838
  Normal  Pulling    35m        kubelet, functional-000838  Pulling image "k8s.gcr.io/echoserver:1.8"
  Normal  Pulled     34m        kubelet, functional-000838  Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 30.5058703s
  Normal  Created    34m        kubelet, functional-000838  Created container echoserver
  Normal  Started    34m        kubelet, functional-000838  Started container echoserver

Name:         hello-node-connect-6458c8fb6f-dqbjm
Namespace:    default
Priority:     0
Node:         functional-000838/192.168.49.2
Start Time:   Tue, 25 Oct 2022 00:13:17 +0000
Labels:       app=hello-node-connect
              pod-template-hash=6458c8fb6f
Annotations:  <none>
Status:       Running
IP:           172.17.0.5
IPs:
  IP:           172.17.0.5
Controlled By:  ReplicaSet/hello-node-connect-6458c8fb6f
Containers:
  echoserver:
    Container ID:   docker://d97f025bc5b7ea69a824106ae128acf751e7d0a24305ff39accbe3b6b5b903f9
    Image:          k8s.gcr.io/echoserver:1.8
    Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 25 Oct 2022 00:13:57 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l8smv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-l8smv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
                             node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From                        Message
  ----    ------     ----       ----                        -------
  Normal  Scheduled  <unknown>                              Successfully assigned default/hello-node-connect-6458c8fb6f-dqbjm to functional-000838
  Normal  Pulling    35m        kubelet, functional-000838  Pulling image "k8s.gcr.io/echoserver:1.8"
  Normal  Pulled     34m        kubelet, functional-000838  Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 35.766734s
  Normal  Created    34m        kubelet, functional-000838  Created container echoserver
  Normal  Started    34m        kubelet, functional-000838  Started container echoserver

functional_test.go:1412: (dbg) Run:  kubectl --context functional-000838 logs -l app=hello-node
functional_test.go:1416: hello-node logs:
functional_test.go:1418: (dbg) Run:  kubectl --context functional-000838 describe svc hello-node
functional_test.go:1422: hello-node svc describe:
Name:                     hello-node
Namespace:                default
Labels:                   app=hello-node
Annotations:              <none>
Selector:                 app=hello-node
Type:                     NodePort
IP:                       10.99.199.53
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32309/TCP
Endpoints:                172.17.0.6:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
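The Service wiring itself looks correct: NodePort 32309 maps to the single ready endpoint 172.17.0.6:8080, so the failure is about reaching the URL from the Windows host rather than about the Service. A hypothetical sanity check (not run by the test; it assumes curl is available inside the kicbase node) would be to hit the NodePort from inside the node, where the node IP 192.168.49.2 is directly reachable:

# query the NodePort from inside the minikube node (hypothetical check, not part of the test)
out/minikube-windows-amd64.exe -p functional-000838 ssh -- curl -sS http://192.168.49.2:32309/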
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-000838
helpers_test.go:235: (dbg) docker inspect functional-000838:

-- stdout --
	[
	    {
	        "Id": "1a85eb831e6835b6e74e9f72a39c023064bb125f66d9c1b408ef5abe0055952d",
	        "Created": "2022-10-25T00:09:17.2468862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 24910,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-10-25T00:09:18.1713589Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bee7563418bf494c9ba81d904a81ea2c80a1e144325734b9d4b288db23240ab5",
	        "ResolvConfPath": "/var/lib/docker/containers/1a85eb831e6835b6e74e9f72a39c023064bb125f66d9c1b408ef5abe0055952d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1a85eb831e6835b6e74e9f72a39c023064bb125f66d9c1b408ef5abe0055952d/hostname",
	        "HostsPath": "/var/lib/docker/containers/1a85eb831e6835b6e74e9f72a39c023064bb125f66d9c1b408ef5abe0055952d/hosts",
	        "LogPath": "/var/lib/docker/containers/1a85eb831e6835b6e74e9f72a39c023064bb125f66d9c1b408ef5abe0055952d/1a85eb831e6835b6e74e9f72a39c023064bb125f66d9c1b408ef5abe0055952d-json.log",
	        "Name": "/functional-000838",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-000838:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-000838",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fd7ceb1116e13e025437a1ba4e3b40fd2a09d79f0a65ee8af31dddf699ce34e7-init/diff:/var/lib/docker/overlay2/1d72d69c076943d6cd413bc50b6a474779145c6396136b4aef1829c16f4a6d69/diff:/var/lib/docker/overlay2/2712457ef6b3ec08714d64e5261a9b327c3f8db2156d7a1b493340af804c46f1/diff:/var/lib/docker/overlay2/956ad2e584ed04429b79ab0ee4bdc8977af3fcfbab3cc0ed570922cc07ffd0a6/diff:/var/lib/docker/overlay2/c4f80c5076f71429b4266dc613d1850e7295faded99f05e04fcb13d2cb4d3157/diff:/var/lib/docker/overlay2/18b12a09b44604345877d4490348801b993263f747090a3a48eac835ac323d86/diff:/var/lib/docker/overlay2/6ce1e052ac8d5221cb1978a93a4c4d18c74da80e998b6e54246cdc95997a769f/diff:/var/lib/docker/overlay2/9e6e7c177b550c9c4fc4af8222ccc9bfe5b01fa177f08388c541fde750e4df80/diff:/var/lib/docker/overlay2/c56ad1fbd8fd09ba635cb91b82c303fab8be925f82edac48c47ed2b99f054b36/diff:/var/lib/docker/overlay2/b4a229acad56b83bd9d04813f3f4cf0c8c562169b12ef1e88243f4588d0b28f9/diff:/var/lib/docker/overlay2/56f30b
af9b74a7e6afda16e0f90a1863a3db06b5fec5cf06828152edc0faa420/diff:/var/lib/docker/overlay2/4275e6a6be34231198b756601a3b51a1d8446e8830b1c4037b20370047b88b9e/diff:/var/lib/docker/overlay2/0a9f47913b546daa2d558a978beaaa9e1e7e73a568fa1ee9d198e1e2154d3f75/diff:/var/lib/docker/overlay2/f1895cfb690eaa9bf966dd3f040878344a80c0dc3606dd2d5e67d9495cfa3ff8/diff:/var/lib/docker/overlay2/84335bbaf957cb1942f1d774b817e78297dbe5ffeb7e2e406e7492cf5a720c7e/diff:/var/lib/docker/overlay2/d9a26e65c06347ae6f8f306617639febfee5427dffa6d33a6acb3abfc22092fb/diff:/var/lib/docker/overlay2/a6893072e83e913a455da1f55020a69e4cd75c9ca7b9893e47d184eaf0da806d/diff:/var/lib/docker/overlay2/2d4c8dbcc1a6e63159280d831a4e448df4587dae065b53837a0e735e579361c4/diff:/var/lib/docker/overlay2/6fd2d854ad2aede74411487bcfe2f1fa3c4e1bbfad739455a690a5801c7c9d18/diff:/var/lib/docker/overlay2/d8435d49436e1e6d94054688732a28cdf047031ca600d938ab879a3f72791749/diff:/var/lib/docker/overlay2/618bd9835cc6596945db86c2cd23a6ea6c60992ff42cb8ba7a13f96776d79bb3/diff:/var/lib/d
ocker/overlay2/8e9af4c331a1374dad5f203889fa4953cd3111c705011d2f885ce8a3a04daf2c/diff:/var/lib/docker/overlay2/b8b4d702f888aa572be928e4e449cfaed5da2a045d94f145c0d48b2f838a2dc5/diff:/var/lib/docker/overlay2/6b708706c388c674df30fea4b16deb3b96447089d2a1cd5341ef199bd5dc3c4e/diff:/var/lib/docker/overlay2/f3bab3644fefb2215fd7b4b857958be30f575fd080ec37030b8b970e46155cdc/diff:/var/lib/docker/overlay2/809d38d9cc75c39f4eab1c2c64257e010b66f6dd17717a251371701f51b07237/diff:/var/lib/docker/overlay2/b2fc12e35954dea9baf6e418bbc1b629a71863e855e4373e8d665590cd7cbc54/diff:/var/lib/docker/overlay2/34dcaea23605015741cd4c620ce445c935ca6a08892a5aa15165a8422bb013c0/diff:/var/lib/docker/overlay2/4c362976bdb9f18c68d5c294dc08d7939899992ed5f8bb13ab34f58ec03fcdd6/diff:/var/lib/docker/overlay2/316879c125d7c6ab5ddb970715d730f6a9ea41f2b58da1ac9379b1d528a25970/diff:/var/lib/docker/overlay2/241a6ea1a0e862f8ac9d51e14f03999907acd9030349143120fad52b3c1c2b97/diff:/var/lib/docker/overlay2/c64f861002875793ea9a7d58a0e0b96ad95c3c7fb2874b758d4fb1bc26c
34587/diff:/var/lib/docker/overlay2/9b91106560e299e000b1229f3c2774c8ff0b881dbb4a27b80b89d0287f2f581d/diff:/var/lib/docker/overlay2/48a0a6d3a2a4100e68d167121a7df5a2244821b71406e29d5cc8220307ed9847/diff:/var/lib/docker/overlay2/1f280e54c1637034501f87fed8ca123799984880082b190271d5fa183974cb70/diff:/var/lib/docker/overlay2/8b8d91bd6daf07b06612bec716b08ed3d8032a4caa291548eead78a2b2c7e037/diff:/var/lib/docker/overlay2/b3ab8284e9708da3d4a94f3bd549609f23fcc286b4c1522cdb244344a4957bba/diff:/var/lib/docker/overlay2/7cc92644ec11a70cec25faf398c533eaa555c3a0ab3e783bf6f0cb342f18de20/diff:/var/lib/docker/overlay2/7f44e48c3f9293e16b6fedacc411012e83674000293a110908fcbe7b8aa0f56c/diff:/var/lib/docker/overlay2/7ded7fd7dc10119d3c74efa565ab8580571328086d82d5e795e7adcd3276e653/diff:/var/lib/docker/overlay2/b4654f15c85f235a8a9d5b03067d9aacd8d02569b48170551e8cc1fb340698ad/diff:/var/lib/docker/overlay2/901a06d4c922f4dcb994eec1c950879f560844312e104093523c1f1637594c70/diff:/var/lib/docker/overlay2/0fdbbeb11fdbed96bd80868c62d4c13bf887e7
83043225667d2bde711d03b757/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fd7ceb1116e13e025437a1ba4e3b40fd2a09d79f0a65ee8af31dddf699ce34e7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fd7ceb1116e13e025437a1ba4e3b40fd2a09d79f0a65ee8af31dddf699ce34e7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fd7ceb1116e13e025437a1ba4e3b40fd2a09d79f0a65ee8af31dddf699ce34e7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-000838",
	                "Source": "/var/lib/docker/volumes/functional-000838/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-000838",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-000838",
	                "name.minikube.sigs.k8s.io": "functional-000838",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a6470ca592c3ac6b279c8a6362eddc515dd628db56486031c46ade398fae54d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62378"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62374"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62375"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62376"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62377"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1a6470ca592c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-000838": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1a85eb831e68",
	                        "functional-000838"
	                    ],
	                    "NetworkID": "c737760c84e2aa542619af8def82aa71511352a4e3d9fc646e3fe13e39a09c29",
	                    "EndpointID": "7450f0f71fadf7af41fdb0f71e1702e3d9e51894c1199fe001ff87e41fdfcf84",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-000838 -n functional-000838
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-000838 -n functional-000838: (1.6464908s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 logs -n 25: (3.1699858s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image          | functional-000838 image ls                               | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	| ssh            | functional-000838 ssh sudo cat                           | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	|                | /usr/share/ca-certificates/4200.pem                      |                   |                   |         |                     |                     |
	| service        | functional-000838 service                                | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT |                     |
	|                | --namespace=default --https                              |                   |                   |         |                     |                     |
	|                | --url hello-node                                         |                   |                   |         |                     |                     |
	| image          | functional-000838 image save --daemon                    | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-000838 |                   |                   |         |                     |                     |
	| ssh            | functional-000838 ssh sudo cat                           | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	|                | /etc/ssl/certs/51391683.0                                |                   |                   |         |                     |                     |
	| docker-env     | functional-000838 docker-env                             | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	| ssh            | functional-000838 ssh sudo cat                           | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	|                | /etc/ssl/certs/42002.pem                                 |                   |                   |         |                     |                     |
	| ssh            | functional-000838 ssh sudo cat                           | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	|                | /usr/share/ca-certificates/42002.pem                     |                   |                   |         |                     |                     |
	| docker-env     | functional-000838 docker-env                             | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	| ssh            | functional-000838 ssh sudo cat                           | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	|                | /etc/ssl/certs/3ec20f2e.0                                |                   |                   |         |                     |                     |
	| ssh            | functional-000838 ssh sudo cat                           | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	|                | /etc/test/nested/copy/4200/hosts                         |                   |                   |         |                     |                     |
	| start          | -p functional-000838                                     | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT |                     |
	|                | --dry-run --memory                                       |                   |                   |         |                     |                     |
	|                | 250MB --alsologtostderr                                  |                   |                   |         |                     |                     |
	|                | --driver=docker                                          |                   |                   |         |                     |                     |
	| start          | -p functional-000838                                     | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT |                     |
	|                | --dry-run --memory                                       |                   |                   |         |                     |                     |
	|                | 250MB --alsologtostderr                                  |                   |                   |         |                     |                     |
	|                | --driver=docker                                          |                   |                   |         |                     |                     |
	| start          | -p functional-000838 --dry-run                           | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT |                     |
	|                | --alsologtostderr -v=1                                   |                   |                   |         |                     |                     |
	|                | --driver=docker                                          |                   |                   |         |                     |                     |
	| dashboard      | --url --port 36195                                       | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT |                     |
	|                | -p functional-000838                                     |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=1                                   |                   |                   |         |                     |                     |
	| update-context | functional-000838                                        | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	|                | update-context                                           |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |                   |         |                     |                     |
	| update-context | functional-000838                                        | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	|                | update-context                                           |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |                   |         |                     |                     |
	| update-context | functional-000838                                        | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	|                | update-context                                           |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |                   |         |                     |                     |
	| image          | functional-000838 image ls                               | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	|                | --format short                                           |                   |                   |         |                     |                     |
	| image          | functional-000838 image ls                               | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	|                | --format yaml                                            |                   |                   |         |                     |                     |
	| ssh            | functional-000838 ssh pgrep                              | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT |                     |
	|                | buildkitd                                                |                   |                   |         |                     |                     |
	| image          | functional-000838 image build -t                         | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	|                | localhost/my-image:functional-000838                     |                   |                   |         |                     |                     |
	|                | testdata\build                                           |                   |                   |         |                     |                     |
	| image          | functional-000838 image ls                               | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	| image          | functional-000838 image ls                               | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	|                | --format json                                            |                   |                   |         |                     |                     |
	| image          | functional-000838 image ls                               | functional-000838 | minikube8\jenkins | v1.27.1 | 25 Oct 22 00:14 GMT | 25 Oct 22 00:14 GMT |
	|                | --format table                                           |                   |                   |         |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/25 00:14:19
	Running on machine: minikube8
	Binary: Built with gc go1.19.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 00:14:19.481508   12224 out.go:296] Setting OutFile to fd 700 ...
	I1025 00:14:19.540111   12224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 00:14:19.540111   12224 out.go:309] Setting ErrFile to fd 860...
	I1025 00:14:19.540111   12224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 00:14:19.560127   12224 out.go:303] Setting JSON to false
	I1025 00:14:19.562113   12224 start.go:116] hostinfo: {"hostname":"minikube8","uptime":6504,"bootTime":1666650355,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W1025 00:14:19.563117   12224 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 00:14:19.567111   12224 out.go:177] * [functional-000838] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1025 00:14:19.571126   12224 notify.go:220] Checking for updates...
	I1025 00:14:19.573103   12224 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 00:14:19.576110   12224 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I1025 00:14:19.578114   12224 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 00:14:19.581111   12224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 00:14:19.587118   12224 config.go:180] Loaded profile config "functional-000838": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 00:14:19.587118   12224 driver.go:362] Setting default libvirt URI to qemu:///system
	I1025 00:14:19.879247   12224 docker.go:137] docker version: linux-20.10.17
	I1025 00:14:19.887554   12224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 00:14:20.588007   12224 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:52 SystemTime:2022-10-25 00:14:20.0839981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 00:14:20.591987   12224 out.go:177] * Using the docker driver based on existing profile
	I1025 00:14:20.594022   12224 start.go:282] selected driver: docker
	I1025 00:14:20.595027   12224 start.go:808] validating driver "docker" against &{Name:functional-000838 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-000838 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false re
gistry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 00:14:20.595027   12224 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 00:14:20.618999   12224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 00:14:21.331304   12224 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:52 SystemTime:2022-10-25 00:14:20.8408039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 00:14:21.399733   12224 cni.go:95] Creating CNI manager for ""
	I1025 00:14:21.399733   12224 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 00:14:21.399733   12224 start_flags.go:317] config:
	{Name:functional-000838 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-000838 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:tru
e storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 00:14:21.404747   12224 out.go:177] * dry-run validation complete!
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-10-25 00:09:18 UTC, end at Tue 2022-10-25 00:48:43 UTC. --
	Oct 25 00:12:08 functional-000838 dockerd[9081]: time="2022-10-25T00:12:08.977068300Z" level=info msg="Loading containers: done."
	Oct 25 00:12:09 functional-000838 dockerd[9081]: time="2022-10-25T00:12:09.064063700Z" level=info msg="Docker daemon" commit=e42327a graphdriver(s)=overlay2 version=20.10.18
	Oct 25 00:12:09 functional-000838 dockerd[9081]: time="2022-10-25T00:12:09.064262300Z" level=info msg="Daemon has completed initialization"
	Oct 25 00:12:09 functional-000838 systemd[1]: Started Docker Application Container Engine.
	Oct 25 00:12:09 functional-000838 dockerd[9081]: time="2022-10-25T00:12:09.125288600Z" level=info msg="API listen on [::]:2376"
	Oct 25 00:12:09 functional-000838 dockerd[9081]: time="2022-10-25T00:12:09.134632900Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 25 00:12:09 functional-000838 dockerd[9081]: time="2022-10-25T00:12:09.836627300Z" level=error msg="Failed to compute size of container rootfs e9479777ed36c57265f8a4fad9798c95c2d9867b8136889fee854abd10442c98: mount does not exist"
	Oct 25 00:12:10 functional-000838 dockerd[9081]: time="2022-10-25T00:12:10.420334000Z" level=error msg="981d60152834a5ad4410dbf945579fb4927668b816d88136fbdf62a7dc3bba7b cleanup: failed to delete container from containerd: no such container"
	Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.141245400Z" level=info msg="ignoring event" container=a8aa3370c142f36bd0779a2d40a176f2e5c19584ced48c180f87547c86788dd0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.339783200Z" level=info msg="ignoring event" container=399c046a7b6a950e8d0a432671268f11e90395eb8e8a7db942a811169396b615 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.339851400Z" level=info msg="ignoring event" container=ab767e575f80d59c66f2274ae2835f1e55f8e3181a6af2163563f4561f75f6ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.439286600Z" level=info msg="ignoring event" container=665bfac058865946be5e1082a1d6870b5d78fb13c429eb3e081bab8f527485cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.440757300Z" level=info msg="ignoring event" container=f0a04fa890e99ff35febc2e0c7d4dd0473d59dc67b8eb4b8e8ed3babe058ccd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.442000900Z" level=info msg="ignoring event" container=6a5b494850dfbf5f1539b19a54ca8142b427bda9902fbc18c26aa5a8041211c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.442079600Z" level=info msg="ignoring event" container=473c9aff1d582303688f909011d72fd0d42b57e3cba9090382c6bc1593db2079 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.445752500Z" level=info msg="ignoring event" container=bffe81aadcea1811c85eab2ab547df80aa32e96c1ed423976163896ef303a90c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.445821000Z" level=info msg="ignoring event" container=24867b2a72b4564c599a7de438eb2ec6334a0a49a9e9e7a2c7c045bfd6301693 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.538811800Z" level=info msg="ignoring event" container=38b9eed413557adb3ca54bd3f50a9601f4df9517b0d19dcb92ec2539eb4d4013 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:12:18 functional-000838 dockerd[9081]: time="2022-10-25T00:12:18.969369700Z" level=info msg="ignoring event" container=b31e2dfc12220919014929ee746ac0c213cf9ed542646598c1758de6ef8429ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:12:27 functional-000838 dockerd[9081]: time="2022-10-25T00:12:27.274263000Z" level=info msg="ignoring event" container=2e47cf062bdda82b2a85ebcecf4ca93e96ba820a7dd507b084da61fb02aa2806 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:12:41 functional-000838 dockerd[9081]: time="2022-10-25T00:12:41.041363400Z" level=info msg="ignoring event" container=24e50a9cd0e62f412c05e84da6b21fbffdef92dbcc3f9a64cb4aa630aa3cd929 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:13:51 functional-000838 dockerd[9081]: time="2022-10-25T00:13:51.848802900Z" level=info msg="ignoring event" container=b191d41a32e23f7a2934dd9918205a78800bec8959920abf8ff0898df10ed2ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:13:54 functional-000838 dockerd[9081]: time="2022-10-25T00:13:54.240019600Z" level=info msg="ignoring event" container=f41be73d98ce07a145d2bba9403a5c3ccefa15215bbe0df9f105706a949bca4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:14:46 functional-000838 dockerd[9081]: time="2022-10-25T00:14:46.552662100Z" level=info msg="ignoring event" container=d0986a7e6203ac1c997ec35509428ee9bec143929a8f5b331eb37762213e8e53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 00:14:47 functional-000838 dockerd[9081]: time="2022-10-25T00:14:47.780500900Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	4a3de2d01161a       mysql@sha256:f5e2d4d7dccdc3f2a1d592bd3f0eb472b2f72f9fb942a84ff5b5cc049fe63a04                   33 minutes ago      Running             mysql                     0                   77361eafd49ce
	5b1a474395471       nginx@sha256:5ffb682b98b0362b66754387e86b0cd31a5cb7123e49e7f6f6617690900d20b2                   34 minutes ago      Running             myfrontend                0                   d862dd2690799
	214cc39686ef2       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   34 minutes ago      Running             echoserver                0                   06ead6c5c54a6
	d97f025bc5b7e       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   34 minutes ago      Running             echoserver                0                   54e4350fdd784
	5e964d76932be       nginx@sha256:bffb4330be734e3268087e28ca51f6ae926f7d4406c7f5b5ab50c5e22570dc32                   35 minutes ago      Running             nginx                     0                   8c59b60228f4c
	36483f466c3d6       beaaf00edd38a                                                                                   36 minutes ago      Running             kube-proxy                5                   9539d2af31e22
	d0d2ae8d106f2       5185b96f0becf                                                                                   36 minutes ago      Running             coredns                   4                   075d4f429d199
	d6e59b1da9b5a       6e38f40d628db                                                                                   36 minutes ago      Running             storage-provisioner       4                   cecc96e23185a
	e7c14f019cccc       0346dbd74bcb9                                                                                   36 minutes ago      Running             kube-apiserver            0                   10e3541eba0a6
	c9582e744ed2b       a8a176a5d5d69                                                                                   36 minutes ago      Running             etcd                      5                   6624fd49b9e1f
	895c338d258d2       6039992312758                                                                                   36 minutes ago      Running             kube-controller-manager   4                   31c417f2dfe56
	575409fc1b630       6d23ec0e8b87e                                                                                   36 minutes ago      Running             kube-scheduler            4                   aa4d754227f59
	bffe81aadcea1       6039992312758                                                                                   36 minutes ago      Exited              kube-controller-manager   3                   665bfac058865
	b31e2dfc12220       6d23ec0e8b87e                                                                                   36 minutes ago      Exited              kube-scheduler            3                   ab767e575f80d
	24867b2a72b45       beaaf00edd38a                                                                                   36 minutes ago      Exited              kube-proxy                4                   399c046a7b6a9
	38b9eed413557       a8a176a5d5d69                                                                                   36 minutes ago      Exited              etcd                      4                   6a5b494850dfb
	2e47cf062bdda       5185b96f0becf                                                                                   36 minutes ago      Exited              coredns                   3                   a8aa3370c142f
	981d60152834a       6e38f40d628db                                                                                   36 minutes ago      Created             storage-provisioner       3                   7158d73f202d6
	
	* 
	* ==> coredns [2e47cf062bdd] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 9164478859933691884.4647672910462180950. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 9164478859933691884.4647672910462180950. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	
	* 
	* ==> coredns [d0d2ae8d106f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> describe nodes <==
	* Name:               functional-000838
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-000838
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e51468b57074bb26eb09785222979dd1e5fe9cd4
	                    minikube.k8s.io/name=functional-000838
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_10_25T00_09_55_0700
	                    minikube.k8s.io/version=v1.27.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 25 Oct 2022 00:09:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-000838
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 25 Oct 2022 00:48:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 25 Oct 2022 00:46:15 +0000   Tue, 25 Oct 2022 00:09:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 25 Oct 2022 00:46:15 +0000   Tue, 25 Oct 2022 00:09:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 25 Oct 2022 00:46:15 +0000   Tue, 25 Oct 2022 00:09:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 25 Oct 2022 00:46:15 +0000   Tue, 25 Oct 2022 00:10:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-000838
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 18f31d64397c45b9b9d6ac880da4e8a3
	  System UUID:                18f31d64397c45b9b9d6ac880da4e8a3
	  Boot ID:                    67927c6c-d6bd-41ca-86c3-f57a6a00a497
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.18
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5fcdfb5cc4-dbp6x                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         35m
	  default                     hello-node-connect-6458c8fb6f-dqbjm          0 (0%)        0 (0%)      0 (0%)           0 (0%)         35m
	  default                     mysql-596b7fcdbf-zfh68                       600m (3%)     700m (4%)   512Mi (0%)       700Mi (1%)     34m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         35m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m
	  kube-system                 coredns-565d847f94-4xdpf                     100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     38m
	  kube-system                 etcd-functional-000838                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         38m
	  kube-system                 kube-apiserver-functional-000838             250m (1%)     0 (0%)      0 (0%)           0 (0%)         36m
	  kube-system                 kube-controller-manager-functional-000838    200m (1%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-proxy-pr4lp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-scheduler-functional-000838             100m (0%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                1350m (8%)  700m (4%)
	  memory             682Mi (1%)  870Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 37m                kube-proxy       
	  Normal  Starting                 36m                kube-proxy       
	  Normal  Starting                 38m                kube-proxy       
	  Normal  NodeHasSufficientMemory  39m (x6 over 39m)  kubelet          Node functional-000838 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39m (x5 over 39m)  kubelet          Node functional-000838 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39m (x5 over 39m)  kubelet          Node functional-000838 status is now: NodeHasSufficientPID
	  Normal  Starting                 38m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  38m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    38m                kubelet          Node functional-000838 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38m                kubelet          Node functional-000838 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  38m                kubelet          Node functional-000838 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                38m                kubelet          Node functional-000838 status is now: NodeReady
	  Normal  RegisteredNode           38m                node-controller  Node functional-000838 event: Registered Node functional-000838 in Controller
	  Normal  Starting                 37m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37m (x8 over 37m)  kubelet          Node functional-000838 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37m (x8 over 37m)  kubelet          Node functional-000838 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37m (x7 over 37m)  kubelet          Node functional-000838 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           37m                node-controller  Node functional-000838 event: Registered Node functional-000838 in Controller
	  Normal  Starting                 36m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  36m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  36m (x8 over 36m)  kubelet          Node functional-000838 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36m (x8 over 36m)  kubelet          Node functional-000838 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36m (x7 over 36m)  kubelet          Node functional-000838 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           35m                node-controller  Node functional-000838 event: Registered Node functional-000838 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct25 00:23] WSL2: Performing memory compaction.
	[Oct25 00:24] WSL2: Performing memory compaction.
	[Oct25 00:25] WSL2: Performing memory compaction.
	[Oct25 00:26] WSL2: Performing memory compaction.
	[Oct25 00:27] WSL2: Performing memory compaction.
	[Oct25 00:28] WSL2: Performing memory compaction.
	[Oct25 00:30] WSL2: Performing memory compaction.
	[Oct25 00:31] WSL2: Performing memory compaction.
	[Oct25 00:32] WSL2: Performing memory compaction.
	[Oct25 00:33] WSL2: Performing memory compaction.
	[Oct25 00:34] WSL2: Performing memory compaction.
	[Oct25 00:35] WSL2: Performing memory compaction.
	[Oct25 00:36] WSL2: Performing memory compaction.
	[Oct25 00:37] WSL2: Performing memory compaction.
	[Oct25 00:38] WSL2: Performing memory compaction.
	[Oct25 00:39] WSL2: Performing memory compaction.
	[Oct25 00:40] WSL2: Performing memory compaction.
	[Oct25 00:41] WSL2: Performing memory compaction.
	[Oct25 00:42] WSL2: Performing memory compaction.
	[Oct25 00:43] WSL2: Performing memory compaction.
	[Oct25 00:44] WSL2: Performing memory compaction.
	[Oct25 00:45] WSL2: Performing memory compaction.
	[Oct25 00:46] WSL2: Performing memory compaction.
	[Oct25 00:47] WSL2: Performing memory compaction.
	[Oct25 00:48] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [38b9eed41355] <==
	* {"level":"info","ts":"2022-10-25T00:12:14.937Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-10-25T00:12:14.937Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-10-25T00:12:14.937Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-10-25T00:12:15.949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2022-10-25T00:12:15.950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2022-10-25T00:12:15.950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2022-10-25T00:12:15.950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 5"}
	{"level":"info","ts":"2022-10-25T00:12:15.950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2022-10-25T00:12:15.950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 5"}
	{"level":"info","ts":"2022-10-25T00:12:15.950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 5"}
	{"level":"info","ts":"2022-10-25T00:12:15.955Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-000838 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-10-25T00:12:15.955Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-10-25T00:12:15.955Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-10-25T00:12:15.958Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-10-25T00:12:15.959Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-10-25T00:12:15.960Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-10-25T00:12:15.960Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-10-25T00:12:18.136Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-10-25T00:12:18.136Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-000838","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/10/25 00:12:18 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/10/25 00:12:18 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-10-25T00:12:18.140Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-10-25T00:12:18.235Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-10-25T00:12:18.237Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-10-25T00:12:18.237Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-000838","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [c9582e744ed2] <==
	* {"level":"warn","ts":"2022-10-25T00:15:15.435Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.5752787s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:13520"}
	{"level":"warn","ts":"2022-10-25T00:15:15.435Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.1777483s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2022-10-25T00:15:15.435Z","caller":"traceutil/trace.go:171","msg":"trace[643986308] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:826; }","duration":"1.7821465s","start":"2022-10-25T00:15:13.653Z","end":"2022-10-25T00:15:15.435Z","steps":["trace[643986308] 'range keys from in-memory index tree'  (duration: 1.7817854s)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T00:15:15.435Z","caller":"traceutil/trace.go:171","msg":"trace[1827885195] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:826; }","duration":"1.575417s","start":"2022-10-25T00:15:13.860Z","end":"2022-10-25T00:15:15.435Z","steps":["trace[1827885195] 'range keys from in-memory index tree'  (duration: 1.5749272s)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T00:15:15.435Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T00:15:13.653Z","time spent":"1.7823187s","remote":"127.0.0.1:60270","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-10-25T00:15:15.435Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T00:15:13.860Z","time spent":"1.5754913s","remote":"127.0.0.1:60204","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":13544,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2022-10-25T00:15:15.435Z","caller":"traceutil/trace.go:171","msg":"trace[997297366] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:826; }","duration":"2.1778854s","start":"2022-10-25T00:15:13.257Z","end":"2022-10-25T00:15:15.435Z","steps":["trace[997297366] 'range keys from in-memory index tree'  (duration: 2.1776036s)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T00:15:15.435Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T00:15:13.257Z","time spent":"2.1780314s","remote":"127.0.0.1:60200","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1141,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2022-10-25T00:20:39.853Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.4605ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-10-25T00:20:39.853Z","caller":"traceutil/trace.go:171","msg":"trace[807629467] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1060; }","duration":"104.6761ms","start":"2022-10-25T00:20:39.748Z","end":"2022-10-25T00:20:39.853Z","steps":["trace[807629467] 'agreement among raft nodes before linearized reading'  (duration: 89.9094ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T00:22:35.324Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":932}
	{"level":"info","ts":"2022-10-25T00:22:35.325Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":932,"took":"1.1828ms"}
	{"level":"warn","ts":"2022-10-25T00:26:22.850Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.3562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-10-25T00:26:22.850Z","caller":"traceutil/trace.go:171","msg":"trace[1400951757] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1301; }","duration":"102.5573ms","start":"2022-10-25T00:26:22.748Z","end":"2022-10-25T00:26:22.850Z","steps":["trace[1400951757] 'agreement among raft nodes before linearized reading'  (duration: 93.474ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T00:26:22.850Z","caller":"traceutil/trace.go:171","msg":"trace[757707022] transaction","detail":"{read_only:false; response_revision:1302; number_of_response:1; }","duration":"101.2465ms","start":"2022-10-25T00:26:22.749Z","end":"2022-10-25T00:26:22.850Z","steps":["trace[757707022] 'process raft request'  (duration: 92.1235ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T00:27:35.341Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1143}
	{"level":"info","ts":"2022-10-25T00:27:35.342Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1143,"took":"700.4µs"}
	{"level":"info","ts":"2022-10-25T00:32:35.356Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1353}
	{"level":"info","ts":"2022-10-25T00:32:35.357Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1353,"took":"602.4µs"}
	{"level":"info","ts":"2022-10-25T00:37:35.374Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1564}
	{"level":"info","ts":"2022-10-25T00:37:35.376Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1564,"took":"913µs"}
	{"level":"info","ts":"2022-10-25T00:42:35.397Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1774}
	{"level":"info","ts":"2022-10-25T00:42:35.398Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1774,"took":"481.5µs"}
	{"level":"info","ts":"2022-10-25T00:47:35.419Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1984}
	{"level":"info","ts":"2022-10-25T00:47:35.420Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1984,"took":"572.1µs"}
	
	* 
	* ==> kernel <==
	*  00:48:44 up 54 min,  0 users,  load average: 0.27, 0.42, 0.62
	Linux functional-000838 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [e7c14f019ccc] <==
	* I1025 00:14:03.260751       1 trace.go:205] Trace[1714950858]: "List(recursive=true) etcd3" audit-id:a4d00c04-2889-4a12-8475-c756ef3cd8d7,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (25-Oct-2022 00:14:02.659) (total time: 601ms):
	Trace[1714950858]: [601.3483ms] [601.3483ms] END
	I1025 00:14:03.261396       1 trace.go:205] Trace[271097132]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:a4d00c04-2889-4a12-8475-c756ef3cd8d7,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (25-Oct-2022 00:14:02.659) (total time: 602ms):
	Trace[271097132]: ---"Listing from storage done" 601ms (00:14:03.260)
	Trace[271097132]: [602.0261ms] [602.0261ms] END
	I1025 00:14:16.704975       1 alloc.go:327] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.97.155.131]
	I1025 00:14:44.596214       1 trace.go:205] Trace[444787370]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.49.2,type:*v1.Endpoints (25-Oct-2022 00:14:42.552) (total time: 2043ms):
	Trace[444787370]: ---"Txn call finished" err:<nil> 2038ms (00:14:44.595)
	Trace[444787370]: [2.0438564s] [2.0438564s] END
	I1025 00:14:44.596911       1 trace.go:205] Trace[1470228253]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:1f7603d5-9a51-4cc3-82a6-b403934f758f,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (25-Oct-2022 00:14:42.863) (total time: 1733ms):
	Trace[1470228253]: ---"About to write a response" 1732ms (00:14:44.596)
	Trace[1470228253]: [1.7331419s] [1.7331419s] END
	I1025 00:14:44.598763       1 trace.go:205] Trace[538812282]: "List(recursive=true) etcd3" audit-id:25aa4187-f30b-4629-9f02-93bb6f8876a9,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (25-Oct-2022 00:14:42.849) (total time: 1749ms):
	Trace[538812282]: [1.7494914s] [1.7494914s] END
	I1025 00:14:44.599661       1 trace.go:205] Trace[875701398]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:25aa4187-f30b-4629-9f02-93bb6f8876a9,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (25-Oct-2022 00:14:42.849) (total time: 1750ms):
	Trace[875701398]: ---"Listing from storage done" 1749ms (00:14:44.598)
	Trace[875701398]: [1.7504719s] [1.7504719s] END
	I1025 00:15:15.437032       1 trace.go:205] Trace[1739597116]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:973a0fd3-63ce-40df-8d4d-eb73e94e9513,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (25-Oct-2022 00:15:13.256) (total time: 2180ms):
	Trace[1739597116]: ---"About to write a response" 2180ms (00:15:15.436)
	Trace[1739597116]: [2.1801976s] [2.1801976s] END
	I1025 00:15:15.437190       1 trace.go:205] Trace[1998770013]: "List(recursive=true) etcd3" audit-id:88010bbd-35a5-4e13-99a1-685e2875360e,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (25-Oct-2022 00:15:13.859) (total time: 1578ms):
	Trace[1998770013]: [1.5780632s] [1.5780632s] END
	I1025 00:15:15.437920       1 trace.go:205] Trace[2047773140]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:88010bbd-35a5-4e13-99a1-685e2875360e,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (25-Oct-2022 00:15:13.859) (total time: 1578ms):
	Trace[2047773140]: ---"Listing from storage done" 1578ms (00:15:15.437)
	Trace[2047773140]: [1.5788195s] [1.5788195s] END
	
	* 
	* ==> kube-controller-manager [895c338d258d] <==
	* I1025 00:12:54.540868       1 range_allocator.go:166] Starting range CIDR allocator
	I1025 00:12:54.540884       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1025 00:12:54.540903       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1025 00:12:54.636569       1 shared_informer.go:262] Caches are synced for TTL
	I1025 00:12:54.636697       1 shared_informer.go:262] Caches are synced for taint
	I1025 00:12:54.636668       1 shared_informer.go:262] Caches are synced for daemon sets
	I1025 00:12:54.636800       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	W1025 00:12:54.636926       1 node_lifecycle_controller.go:1058] Missing timestamp for Node functional-000838. Assuming now as a timestamp.
	I1025 00:12:54.637060       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1025 00:12:54.636806       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I1025 00:12:54.637265       1 taint_manager.go:209] "Sending events to api server"
	I1025 00:12:54.637466       1 event.go:294] "Event occurred" object="functional-000838" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-000838 event: Registered Node functional-000838 in Controller"
	I1025 00:12:54.637557       1 shared_informer.go:262] Caches are synced for attach detach
	I1025 00:12:54.637602       1 shared_informer.go:262] Caches are synced for GC
	I1025 00:12:54.637723       1 shared_informer.go:262] Caches are synced for persistent volume
	I1025 00:12:54.749034       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 00:12:54.757591       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 00:12:54.757695       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1025 00:13:15.713591       1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I1025 00:13:16.973837       1 event.go:294] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-6458c8fb6f to 1"
	I1025 00:13:17.049295       1 event.go:294] "Event occurred" object="default/hello-node-connect-6458c8fb6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-6458c8fb6f-dqbjm"
	I1025 00:13:22.140438       1 event.go:294] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-5fcdfb5cc4 to 1"
	I1025 00:13:22.235336       1 event.go:294] "Event occurred" object="default/hello-node-5fcdfb5cc4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-5fcdfb5cc4-dbp6x"
	I1025 00:14:16.837703       1 event.go:294] "Event occurred" object="default/mysql" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-596b7fcdbf to 1"
	I1025 00:14:16.851034       1 event.go:294] "Event occurred" object="default/mysql-596b7fcdbf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-596b7fcdbf-zfh68"
	
	* 
	* ==> kube-controller-manager [bffe81aadcea] <==
	* 
	* 
	* ==> kube-proxy [24867b2a72b4] <==
	* E1025 00:12:14.549983       1 proxier.go:656] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I1025 00:12:14.635948       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1025 00:12:14.639920       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1025 00:12:14.643460       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1025 00:12:14.646798       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1025 00:12:14.650151       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E1025 00:12:14.735657       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-000838": dial tcp 192.168.49.2:8441: connect: connection refused
	E1025 00:12:15.794808       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-000838": dial tcp 192.168.49.2:8441: connect: connection refused
	E1025 00:12:18.037197       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-000838": dial tcp 192.168.49.2:8441: connect: connection refused
	
	* 
	* ==> kube-proxy [36483f466c3d] <==
	* I1025 00:12:41.540400       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1025 00:12:41.545533       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1025 00:12:41.635618       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1025 00:12:41.639847       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1025 00:12:41.642964       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I1025 00:12:41.836110       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I1025 00:12:41.836268       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I1025 00:12:41.837492       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1025 00:12:41.947094       1 server_others.go:206] "Using iptables Proxier"
	I1025 00:12:41.947332       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1025 00:12:41.947360       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1025 00:12:41.947377       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1025 00:12:41.947404       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 00:12:41.947863       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 00:12:41.948463       1 server.go:661] "Version info" version="v1.25.3"
	I1025 00:12:41.948683       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 00:12:41.949677       1 config.go:444] "Starting node config controller"
	I1025 00:12:41.949885       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1025 00:12:41.950180       1 config.go:226] "Starting endpoint slice config controller"
	I1025 00:12:41.950428       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1025 00:12:41.950245       1 config.go:317] "Starting service config controller"
	I1025 00:12:41.950837       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1025 00:12:42.050002       1 shared_informer.go:262] Caches are synced for node config
	I1025 00:12:42.051802       1 shared_informer.go:262] Caches are synced for service config
	I1025 00:12:42.051918       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [575409fc1b63] <==
	* I1025 00:12:33.972207       1 serving.go:348] Generated self-signed cert in-memory
	W1025 00:12:38.839040       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 00:12:38.839863       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 00:12:38.840014       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 00:12:38.840038       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 00:12:39.039502       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1025 00:12:39.039649       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 00:12:39.041788       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 00:12:39.042494       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 00:12:39.042600       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 00:12:39.042986       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 00:12:39.142893       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [b31e2dfc1222] <==
	* I1025 00:12:17.038848       1 serving.go:348] Generated self-signed cert in-memory
	W1025 00:12:18.926252       1 authentication.go:346] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused
	W1025 00:12:18.926478       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 00:12:18.926492       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 00:12:18.936029       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1025 00:12:18.936139       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 00:12:18.938074       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 00:12:18.938204       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 00:12:18.938271       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1025 00:12:18.938626       1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 00:12:18.938777       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 00:12:18.938791       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 00:12:18.938827       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1025 00:12:18.938833       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1025 00:12:18.939676       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-10-25 00:09:18 UTC, end at Tue 2022-10-25 00:48:44 UTC. --
	Oct 25 00:13:26 functional-000838 kubelet[11340]: I1025 00:13:26.651326   11340 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="06ead6c5c54a68b62965e310300a50a30bc33bed5e06716b1210b4a45192f9e2"
	Oct 25 00:13:55 functional-000838 kubelet[11340]: I1025 00:13:55.456251   11340 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95p55\" (UniqueName: \"kubernetes.io/projected/bfe72ebb-3731-474d-84b6-94b684b4df81-kube-api-access-95p55\") pod \"bfe72ebb-3731-474d-84b6-94b684b4df81\" (UID: \"bfe72ebb-3731-474d-84b6-94b684b4df81\") "
	Oct 25 00:13:55 functional-000838 kubelet[11340]: I1025 00:13:55.456513   11340 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/bfe72ebb-3731-474d-84b6-94b684b4df81-pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a\") pod \"bfe72ebb-3731-474d-84b6-94b684b4df81\" (UID: \"bfe72ebb-3731-474d-84b6-94b684b4df81\") "
	Oct 25 00:13:55 functional-000838 kubelet[11340]: I1025 00:13:55.456609   11340 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe72ebb-3731-474d-84b6-94b684b4df81-pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a" (OuterVolumeSpecName: "mypd") pod "bfe72ebb-3731-474d-84b6-94b684b4df81" (UID: "bfe72ebb-3731-474d-84b6-94b684b4df81"). InnerVolumeSpecName "pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 25 00:13:55 functional-000838 kubelet[11340]: I1025 00:13:55.460145   11340 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe72ebb-3731-474d-84b6-94b684b4df81-kube-api-access-95p55" (OuterVolumeSpecName: "kube-api-access-95p55") pod "bfe72ebb-3731-474d-84b6-94b684b4df81" (UID: "bfe72ebb-3731-474d-84b6-94b684b4df81"). InnerVolumeSpecName "kube-api-access-95p55". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 25 00:13:55 functional-000838 kubelet[11340]: I1025 00:13:55.556931   11340 reconciler.go:399] "Volume detached for volume \"pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a\" (UniqueName: \"kubernetes.io/host-path/bfe72ebb-3731-474d-84b6-94b684b4df81-pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a\") on node \"functional-000838\" DevicePath \"\""
	Oct 25 00:13:55 functional-000838 kubelet[11340]: I1025 00:13:55.557062   11340 reconciler.go:399] "Volume detached for volume \"kube-api-access-95p55\" (UniqueName: \"kubernetes.io/projected/bfe72ebb-3731-474d-84b6-94b684b4df81-kube-api-access-95p55\") on node \"functional-000838\" DevicePath \"\""
	Oct 25 00:13:56 functional-000838 kubelet[11340]: I1025 00:13:56.459734   11340 scope.go:115] "RemoveContainer" containerID="b191d41a32e23f7a2934dd9918205a78800bec8959920abf8ff0898df10ed2ac"
	Oct 25 00:13:57 functional-000838 kubelet[11340]: I1025 00:13:57.752705   11340 topology_manager.go:205] "Topology Admit Handler"
	Oct 25 00:13:57 functional-000838 kubelet[11340]: E1025 00:13:57.753027   11340 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="bfe72ebb-3731-474d-84b6-94b684b4df81" containerName="myfrontend"
	Oct 25 00:13:57 functional-000838 kubelet[11340]: I1025 00:13:57.753135   11340 memory_manager.go:345] "RemoveStaleState removing state" podUID="bfe72ebb-3731-474d-84b6-94b684b4df81" containerName="myfrontend"
	Oct 25 00:13:58 functional-000838 kubelet[11340]: I1025 00:13:58.053255   11340 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft54w\" (UniqueName: \"kubernetes.io/projected/5ff00d85-4bc2-4a5e-a1f1-70f2ca0e059b-kube-api-access-ft54w\") pod \"sp-pod\" (UID: \"5ff00d85-4bc2-4a5e-a1f1-70f2ca0e059b\") " pod="default/sp-pod"
	Oct 25 00:13:58 functional-000838 kubelet[11340]: I1025 00:13:58.053417   11340 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a\" (UniqueName: \"kubernetes.io/host-path/5ff00d85-4bc2-4a5e-a1f1-70f2ca0e059b-pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a\") pod \"sp-pod\" (UID: \"5ff00d85-4bc2-4a5e-a1f1-70f2ca0e059b\") " pod="default/sp-pod"
	Oct 25 00:13:58 functional-000838 kubelet[11340]: I1025 00:13:58.549137   11340 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=bfe72ebb-3731-474d-84b6-94b684b4df81 path="/var/lib/kubelet/pods/bfe72ebb-3731-474d-84b6-94b684b4df81/volumes"
	Oct 25 00:13:59 functional-000838 kubelet[11340]: I1025 00:13:59.997927   11340 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d862dd26907998346a8a714dcaeca5ed7358430eb37f8d9a197323354d8971f5"
	Oct 25 00:14:16 functional-000838 kubelet[11340]: I1025 00:14:16.862276   11340 topology_manager.go:205] "Topology Admit Handler"
	Oct 25 00:14:16 functional-000838 kubelet[11340]: I1025 00:14:16.864685   11340 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlwbz\" (UniqueName: \"kubernetes.io/projected/e4191e78-ae83-4272-9aa0-dd3c9d287cf5-kube-api-access-dlwbz\") pod \"mysql-596b7fcdbf-zfh68\" (UID: \"e4191e78-ae83-4272-9aa0-dd3c9d287cf5\") " pod="default/mysql-596b7fcdbf-zfh68"
	Oct 25 00:14:18 functional-000838 kubelet[11340]: I1025 00:14:18.323660   11340 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="77361eafd49cea60cb995c77b6e5394a0b5c0da280359b5255b64016d2d21909"
	Oct 25 00:17:30 functional-000838 kubelet[11340]: W1025 00:17:30.775959   11340 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Oct 25 00:22:30 functional-000838 kubelet[11340]: W1025 00:22:30.775606   11340 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Oct 25 00:27:30 functional-000838 kubelet[11340]: W1025 00:27:30.781090   11340 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Oct 25 00:32:30 functional-000838 kubelet[11340]: W1025 00:32:30.781452   11340 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Oct 25 00:37:30 functional-000838 kubelet[11340]: W1025 00:37:30.782935   11340 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Oct 25 00:42:30 functional-000838 kubelet[11340]: W1025 00:42:30.850375   11340 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Oct 25 00:47:30 functional-000838 kubelet[11340]: W1025 00:47:30.787218   11340 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	
	* 
	* ==> storage-provisioner [981d60152834] <==
	* 
	* 
	* ==> storage-provisioner [d6e59b1da9b5] <==
	* I1025 00:12:40.466263       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 00:12:40.556682       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 00:12:40.556956       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 00:12:58.068521       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 00:12:58.068895       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-000838_877d2184-e0b3-49f6-a581-c2014f095838!
	I1025 00:12:58.068922       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"15a9ecb3-10ed-4a9e-9c32-2b27a682c62c", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-000838_877d2184-e0b3-49f6-a581-c2014f095838 became leader
	I1025 00:12:58.169937       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-000838_877d2184-e0b3-49f6-a581-c2014f095838!
	I1025 00:13:15.713292       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1025 00:13:15.713566       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    d3344b57-bdc7-476c-9e6d-a3a302f8bda8 382 0 2022-10-25 00:10:13 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-10-25 00:10:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a &PersistentVolumeClaim{ObjectMeta:{myclaim  default  74af0df9-4673-4b35-9b41-e6a28e4a469a 640 0 2022-10-25 00:13:15 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2022-10-25 00:13:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-10-25 00:13:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1025 00:13:15.714234       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"74af0df9-4673-4b35-9b41-e6a28e4a469a", APIVersion:"v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1025 00:13:15.714507       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a" provisioned
	I1025 00:13:15.714535       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1025 00:13:15.714545       1 volume_store.go:212] Trying to save persistentvolume "pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a"
	I1025 00:13:15.740558       1 volume_store.go:219] persistentvolume "pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a" saved
	I1025 00:13:15.740999       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"74af0df9-4673-4b35-9b41-e6a28e4a469a", APIVersion:"v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-74af0df9-4673-4b35-9b41-e6a28e4a469a
	

                                                
                                                
-- /stdout --
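For readability, the claim being provisioned in the object dump above (default/myclaim, 500Mi, ReadWriteOnce, default storage class "standard", hostpath target /tmp/hostpath-provisioner/default/myclaim) corresponds to roughly the following object. This is a reconstruction from the logged spec only, not code from the test suite; field names follow the API version shown in the log.

	package pvcexample
	
	import (
		corev1 "k8s.io/api/core/v1"
		"k8s.io/apimachinery/pkg/api/resource"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	// myclaim rebuilds the PVC described in the provisioner log above;
	// values are copied from the logged spec, not invented.
	func myclaim() *corev1.PersistentVolumeClaim {
		sc := "standard"
		mode := corev1.PersistentVolumeFilesystem
		return &corev1.PersistentVolumeClaim{
			ObjectMeta: metav1.ObjectMeta{Name: "myclaim", Namespace: "default"},
			Spec: corev1.PersistentVolumeClaimSpec{
				AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
				StorageClassName: &sc,
				VolumeMode:       &mode,
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceStorage: resource.MustParse("500Mi"),
					},
				},
			},
		}
	}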
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-000838 -n functional-000838
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-000838 -n functional-000838: (1.5464067s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-000838 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-000838 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-000838 describe pod : exit status 1 (168.9927ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context functional-000838 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (2125.15s)
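Note on the post-mortem above: the describe step fails only because the field selector returned no non-running pods, so kubectl was invoked with an empty resource name ("resource name may not be empty"). A minimal sketch of guarding that call, using a hypothetical helper rather than the actual helpers_test.go code:

	package postmortem
	
	import (
		"os/exec"
		"strings"
	)
	
	// describeNonRunning only runs "kubectl describe pod" when the
	// field-selector query actually returned pod names, so an empty
	// result does not become "resource name may not be empty".
	func describeNonRunning(kubeContext, podList string) ([]byte, error) {
		names := strings.Fields(podList) // names from the jsonpath query above
		if len(names) == 0 {
			return nil, nil // nothing non-running to describe
		}
		args := append([]string{"--context", kubeContext, "describe", "pod"}, names...)
		return exec.Command("kubectl", args...).CombinedOutput()
	}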

                                                
                                    
TestFunctional/parallel/License (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-windows-amd64.exe license

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Done: out/minikube-windows-amd64.exe license: (1.639558s)
functional_test.go:2218: (dbg) Run:  ls ./licenses
functional_test.go:2218: (dbg) Non-zero exit: ls ./licenses: exec: "ls": executable file not found in %PATH% (0s)
functional_test.go:2220: command "" failed: exec: "ls": executable file not found in %PATH%
--- FAIL: TestFunctional/parallel/License (1.66s)
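The failure above appears to be a host-tooling issue rather than a product one: the test shells out to "ls ./licenses", and ls is not in %PATH% on the Windows CI host. A portable way for the test to list the extracted licenses directory would be the standard library, as in this sketch (not the current functional_test.go code):

	package licensecheck
	
	import "os"
	
	// listLicenses lists the extracted licenses directory with the Go
	// standard library instead of shelling out to "ls", which is not
	// available in %PATH% on the Windows test hosts.
	func listLicenses(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir) // e.g. "./licenses"
		if err != nil {
			return nil, err
		}
		names := make([]string, 0, len(entries))
		for _, e := range entries {
			names = append(names, e.Name())
		}
		return names, nil
	}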

                                                
                                    
TestPause/serial/Pause (42.44s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-012456 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p pause-012456 --alsologtostderr -v=5: exit status 80 (6.7623417s)

                                                
                                                
-- stdout --
	* Pausing node pause-012456 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 01:28:31.505558   11388 out.go:296] Setting OutFile to fd 440 ...
	I1025 01:28:31.575069   11388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:28:31.575069   11388 out.go:309] Setting ErrFile to fd 812...
	I1025 01:28:31.575069   11388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:28:31.594081   11388 out.go:303] Setting JSON to false
	I1025 01:28:31.594081   11388 mustload.go:65] Loading cluster: pause-012456
	I1025 01:28:31.595089   11388 config.go:180] Loaded profile config "pause-012456": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:28:31.616064   11388 cli_runner.go:164] Run: docker container inspect pause-012456 --format={{.State.Status}}
	I1025 01:28:31.843065   11388 host.go:66] Checking if "pause-012456" exists ...
	I1025 01:28:31.854071   11388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-012456
	I1025 01:28:32.099077   11388 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks
:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.27.0-1666206003-15159/minikube-v1.27.0-1666206003-15159-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.27.0-1666206003-15159-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) me
mory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:C:\Users\jenkins.minikube8:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-012456 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 socket-vmnet-client-path:/opt/socket_vmnet/bin/socket_vmnet_client socket-vmnet-path:/var/run/socket_vmnet ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtu
alboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 01:28:32.379073   11388 out.go:177] * Pausing node pause-012456 ... 
	I1025 01:28:32.567423   11388 host.go:66] Checking if "pause-012456" exists ...
	I1025 01:28:32.592424   11388 ssh_runner.go:195] Run: systemctl --version
	I1025 01:28:32.605993   11388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-012456
	I1025 01:28:32.855442   11388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64560 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\pause-012456\id_rsa Username:docker}
	I1025 01:28:33.012337   11388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 01:28:33.043637   11388 pause.go:51] kubelet running: true
	I1025 01:28:33.056914   11388 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 01:28:33.430089   11388 ssh_runner.go:195] Run: docker ps --filter status=running --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I1025 01:28:33.492906   11388 docker.go:460] Pausing containers: [c6f5fed8d5a9 e61a15ac0a82 96156d53ee28 162f68bfd3ce 80a120881748 48c9607ccbd6 805d7017a1e7 9a61857de96a fd7c603c9b74 839db953c013 21921b116293 1aadf09053d4 b8028b4301dc 0e35b0108201]
	I1025 01:28:33.503675   11388 ssh_runner.go:195] Run: docker pause c6f5fed8d5a9 e61a15ac0a82 96156d53ee28 162f68bfd3ce 80a120881748 48c9607ccbd6 805d7017a1e7 9a61857de96a fd7c603c9b74 839db953c013 21921b116293 1aadf09053d4 b8028b4301dc 0e35b0108201
	I1025 01:28:36.375328   11388 ssh_runner.go:235] Completed: docker pause c6f5fed8d5a9 e61a15ac0a82 96156d53ee28 162f68bfd3ce 80a120881748 48c9607ccbd6 805d7017a1e7 9a61857de96a fd7c603c9b74 839db953c013 21921b116293 1aadf09053d4 b8028b4301dc 0e35b0108201: (2.8716336s)
	I1025 01:28:36.379320   11388 out.go:177] 
	W1025 01:28:36.382311   11388 out.go:239] X Exiting due to GUEST_PAUSE: pausing containers: docker: docker pause c6f5fed8d5a9 e61a15ac0a82 96156d53ee28 162f68bfd3ce 80a120881748 48c9607ccbd6 805d7017a1e7 9a61857de96a fd7c603c9b74 839db953c013 21921b116293 1aadf09053d4 b8028b4301dc 0e35b0108201: Process exited with status 1
	stdout:
	c6f5fed8d5a9
	e61a15ac0a82
	96156d53ee28
	162f68bfd3ce
	80a120881748
	48c9607ccbd6
	805d7017a1e7
	fd7c603c9b74
	839db953c013
	21921b116293
	1aadf09053d4
	b8028b4301dc
	0e35b0108201
	
	stderr:
	Error response from daemon: Cannot pause container 9a61857de96a3e8c49802ee9da9ed1c19d357f354f0e1efa685d91a22624558e: OCI runtime pause failed: unable to freeze: unknown
	
	X Exiting due to GUEST_PAUSE: pausing containers: docker: docker pause c6f5fed8d5a9 e61a15ac0a82 96156d53ee28 162f68bfd3ce 80a120881748 48c9607ccbd6 805d7017a1e7 9a61857de96a fd7c603c9b74 839db953c013 21921b116293 1aadf09053d4 b8028b4301dc 0e35b0108201: Process exited with status 1
	stdout:
	c6f5fed8d5a9
	e61a15ac0a82
	96156d53ee28
	162f68bfd3ce
	80a120881748
	48c9607ccbd6
	805d7017a1e7
	fd7c603c9b74
	839db953c013
	21921b116293
	1aadf09053d4
	b8028b4301dc
	0e35b0108201
	
	stderr:
	Error response from daemon: Cannot pause container 9a61857de96a3e8c49802ee9da9ed1c19d357f354f0e1efa685d91a22624558e: OCI runtime pause failed: unable to freeze: unknown
	
	W1025 01:28:36.382311   11388 out.go:239] * 
	* 
	W1025 01:28:37.512550   11388 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube_pause_af5e6777317b02357cc1bb6c73885f084c0a6c97_49.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube_pause_af5e6777317b02357cc1bb6c73885f084c0a6c97_49.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 01:28:37.765398   11388 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-windows-amd64.exe pause -p pause-012456 --alsologtostderr -v=5" : exit status 80
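The pause fails because one container (9a61857de96a...) cannot be frozen by the OCI runtime, which makes the single batched "docker pause" exit non-zero even though the other thirteen IDs in the stdout list did pause. A hedged diagnostic sketch that pauses IDs one at a time to isolate the offender (a hypothetical helper, not minikube's pause.go):

	package pausedebug
	
	import (
		"fmt"
		"os/exec"
	)
	
	// pauseEach pauses container IDs one at a time so a single freezer
	// failure (as with 9a61857de96a above) does not hide which IDs
	// actually paused. It returns the per-container errors, if any.
	func pauseEach(ids []string) map[string]error {
		failures := make(map[string]error)
		for _, id := range ids {
			out, err := exec.Command("docker", "pause", id).CombinedOutput()
			if err != nil {
				failures[id] = fmt.Errorf("docker pause %s: %v: %s", id, err, out)
			}
		}
		return failures
	}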
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-012456
helpers_test.go:235: (dbg) docker inspect pause-012456:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "20521f22f32c2730e7a4d10b52805adba1224c6db842b442148c41141f6c10d3",
	        "Created": "2022-10-25T01:26:16.2008334Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 163799,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-10-25T01:26:18.3793296Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bee7563418bf494c9ba81d904a81ea2c80a1e144325734b9d4b288db23240ab5",
	        "ResolvConfPath": "/var/lib/docker/containers/20521f22f32c2730e7a4d10b52805adba1224c6db842b442148c41141f6c10d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/20521f22f32c2730e7a4d10b52805adba1224c6db842b442148c41141f6c10d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/20521f22f32c2730e7a4d10b52805adba1224c6db842b442148c41141f6c10d3/hosts",
	        "LogPath": "/var/lib/docker/containers/20521f22f32c2730e7a4d10b52805adba1224c6db842b442148c41141f6c10d3/20521f22f32c2730e7a4d10b52805adba1224c6db842b442148c41141f6c10d3-json.log",
	        "Name": "/pause-012456",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-012456:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-012456",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3aaedc988fe43d0d9b780713b2dd8d700a4e40d1e53cea399b8a69fd928ffb48-init/diff:/var/lib/docker/overlay2/1d72d69c076943d6cd413bc50b6a474779145c6396136b4aef1829c16f4a6d69/diff:/var/lib/docker/overlay2/2712457ef6b3ec08714d64e5261a9b327c3f8db2156d7a1b493340af804c46f1/diff:/var/lib/docker/overlay2/956ad2e584ed04429b79ab0ee4bdc8977af3fcfbab3cc0ed570922cc07ffd0a6/diff:/var/lib/docker/overlay2/c4f80c5076f71429b4266dc613d1850e7295faded99f05e04fcb13d2cb4d3157/diff:/var/lib/docker/overlay2/18b12a09b44604345877d4490348801b993263f747090a3a48eac835ac323d86/diff:/var/lib/docker/overlay2/6ce1e052ac8d5221cb1978a93a4c4d18c74da80e998b6e54246cdc95997a769f/diff:/var/lib/docker/overlay2/9e6e7c177b550c9c4fc4af8222ccc9bfe5b01fa177f08388c541fde750e4df80/diff:/var/lib/docker/overlay2/c56ad1fbd8fd09ba635cb91b82c303fab8be925f82edac48c47ed2b99f054b36/diff:/var/lib/docker/overlay2/b4a229acad56b83bd9d04813f3f4cf0c8c562169b12ef1e88243f4588d0b28f9/diff:/var/lib/docker/overlay2/56f30b
af9b74a7e6afda16e0f90a1863a3db06b5fec5cf06828152edc0faa420/diff:/var/lib/docker/overlay2/4275e6a6be34231198b756601a3b51a1d8446e8830b1c4037b20370047b88b9e/diff:/var/lib/docker/overlay2/0a9f47913b546daa2d558a978beaaa9e1e7e73a568fa1ee9d198e1e2154d3f75/diff:/var/lib/docker/overlay2/f1895cfb690eaa9bf966dd3f040878344a80c0dc3606dd2d5e67d9495cfa3ff8/diff:/var/lib/docker/overlay2/84335bbaf957cb1942f1d774b817e78297dbe5ffeb7e2e406e7492cf5a720c7e/diff:/var/lib/docker/overlay2/d9a26e65c06347ae6f8f306617639febfee5427dffa6d33a6acb3abfc22092fb/diff:/var/lib/docker/overlay2/a6893072e83e913a455da1f55020a69e4cd75c9ca7b9893e47d184eaf0da806d/diff:/var/lib/docker/overlay2/2d4c8dbcc1a6e63159280d831a4e448df4587dae065b53837a0e735e579361c4/diff:/var/lib/docker/overlay2/6fd2d854ad2aede74411487bcfe2f1fa3c4e1bbfad739455a690a5801c7c9d18/diff:/var/lib/docker/overlay2/d8435d49436e1e6d94054688732a28cdf047031ca600d938ab879a3f72791749/diff:/var/lib/docker/overlay2/618bd9835cc6596945db86c2cd23a6ea6c60992ff42cb8ba7a13f96776d79bb3/diff:/var/lib/d
ocker/overlay2/8e9af4c331a1374dad5f203889fa4953cd3111c705011d2f885ce8a3a04daf2c/diff:/var/lib/docker/overlay2/b8b4d702f888aa572be928e4e449cfaed5da2a045d94f145c0d48b2f838a2dc5/diff:/var/lib/docker/overlay2/6b708706c388c674df30fea4b16deb3b96447089d2a1cd5341ef199bd5dc3c4e/diff:/var/lib/docker/overlay2/f3bab3644fefb2215fd7b4b857958be30f575fd080ec37030b8b970e46155cdc/diff:/var/lib/docker/overlay2/809d38d9cc75c39f4eab1c2c64257e010b66f6dd17717a251371701f51b07237/diff:/var/lib/docker/overlay2/b2fc12e35954dea9baf6e418bbc1b629a71863e855e4373e8d665590cd7cbc54/diff:/var/lib/docker/overlay2/34dcaea23605015741cd4c620ce445c935ca6a08892a5aa15165a8422bb013c0/diff:/var/lib/docker/overlay2/4c362976bdb9f18c68d5c294dc08d7939899992ed5f8bb13ab34f58ec03fcdd6/diff:/var/lib/docker/overlay2/316879c125d7c6ab5ddb970715d730f6a9ea41f2b58da1ac9379b1d528a25970/diff:/var/lib/docker/overlay2/241a6ea1a0e862f8ac9d51e14f03999907acd9030349143120fad52b3c1c2b97/diff:/var/lib/docker/overlay2/c64f861002875793ea9a7d58a0e0b96ad95c3c7fb2874b758d4fb1bc26c
34587/diff:/var/lib/docker/overlay2/9b91106560e299e000b1229f3c2774c8ff0b881dbb4a27b80b89d0287f2f581d/diff:/var/lib/docker/overlay2/48a0a6d3a2a4100e68d167121a7df5a2244821b71406e29d5cc8220307ed9847/diff:/var/lib/docker/overlay2/1f280e54c1637034501f87fed8ca123799984880082b190271d5fa183974cb70/diff:/var/lib/docker/overlay2/8b8d91bd6daf07b06612bec716b08ed3d8032a4caa291548eead78a2b2c7e037/diff:/var/lib/docker/overlay2/b3ab8284e9708da3d4a94f3bd549609f23fcc286b4c1522cdb244344a4957bba/diff:/var/lib/docker/overlay2/7cc92644ec11a70cec25faf398c533eaa555c3a0ab3e783bf6f0cb342f18de20/diff:/var/lib/docker/overlay2/7f44e48c3f9293e16b6fedacc411012e83674000293a110908fcbe7b8aa0f56c/diff:/var/lib/docker/overlay2/7ded7fd7dc10119d3c74efa565ab8580571328086d82d5e795e7adcd3276e653/diff:/var/lib/docker/overlay2/b4654f15c85f235a8a9d5b03067d9aacd8d02569b48170551e8cc1fb340698ad/diff:/var/lib/docker/overlay2/901a06d4c922f4dcb994eec1c950879f560844312e104093523c1f1637594c70/diff:/var/lib/docker/overlay2/0fdbbeb11fdbed96bd80868c62d4c13bf887e7
83043225667d2bde711d03b757/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3aaedc988fe43d0d9b780713b2dd8d700a4e40d1e53cea399b8a69fd928ffb48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3aaedc988fe43d0d9b780713b2dd8d700a4e40d1e53cea399b8a69fd928ffb48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3aaedc988fe43d0d9b780713b2dd8d700a4e40d1e53cea399b8a69fd928ffb48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-012456",
	                "Source": "/var/lib/docker/volumes/pause-012456/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-012456",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-012456",
	                "name.minikube.sigs.k8s.io": "pause-012456",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a6582eedaff24e13771a62e8953d6e7a2f955a07f013fe19da233f0adca1261",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64560"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64561"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64563"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64564"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6a6582eedaff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-012456": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "20521f22f32c",
	                        "pause-012456"
	                    ],
	                    "NetworkID": "215bbf25ac33d2c24e30f8c0b7898eb5d9b9ddd1cf9424c60f9de63e2a4ebba4",
	                    "EndpointID": "880803c180d62462f225f0096de73b6edf45b16261ead8e066449708b0829800",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
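The inspect dump above shows the node container itself is still running and not paused after the failed pause attempt (State.Status "running", State.Paused false). Assuming the same profile name, those two fields can be spot-checked with a short format query, sketched here in Go:

	package inspectcheck
	
	import "os/exec"
	
	// nodeState spot-checks the Status and Paused fields shown in the
	// inspect output above for the kic node container (e.g. "pause-012456").
	func nodeState(container string) (string, error) {
		out, err := exec.Command("docker", "inspect", "-f",
			"{{.State.Status}} paused={{.State.Paused}}", container).Output()
		return string(out), err
	}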
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-012456 -n pause-012456
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-012456 -n pause-012456: exit status 2 (1.6364442s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-012456 logs -n 25

                                                
                                                
=== CONT  TestPause/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-012456 logs -n 25: (14.4322812s)
helpers_test.go:252: TestPause/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|-------------------|---------|---------------------|---------------------|
	| node    | add -p multinode-010431        | multinode-010431            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:16 GMT |                     |
	| delete  | -p multinode-010431-m03        | multinode-010431-m03        | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:16 GMT | 25 Oct 22 01:16 GMT |
	| delete  | -p multinode-010431            | multinode-010431            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:16 GMT | 25 Oct 22 01:17 GMT |
	| start   | -p test-preload-011708         | test-preload-011708         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:17 GMT | 25 Oct 22 01:19 GMT |
	|         | --memory=2200                  |                             |                   |         |                     |                     |
	|         | --alsologtostderr              |                             |                   |         |                     |                     |
	|         | --wait=true --preload=false    |                             |                   |         |                     |                     |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                             |                   |         |                     |                     |
	| ssh     | -p test-preload-011708         | test-preload-011708         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:19 GMT | 25 Oct 22 01:19 GMT |
	|         | -- docker pull                 |                             |                   |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox    |                             |                   |         |                     |                     |
	| start   | -p test-preload-011708         | test-preload-011708         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:19 GMT | 25 Oct 22 01:21 GMT |
	|         | --memory=2200                  |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                             |                   |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |                   |         |                     |                     |
	|         | --kubernetes-version=v1.24.6   |                             |                   |         |                     |                     |
	| ssh     | -p test-preload-011708 --      | test-preload-011708         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:21 GMT | 25 Oct 22 01:21 GMT |
	|         | docker images                  |                             |                   |         |                     |                     |
	| delete  | -p test-preload-011708         | test-preload-011708         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:21 GMT | 25 Oct 22 01:21 GMT |
	| start   | -p scheduled-stop-012128       | scheduled-stop-012128       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:21 GMT | 25 Oct 22 01:22 GMT |
	|         | --memory=2048 --driver=docker  |                             |                   |         |                     |                     |
	| stop    | -p scheduled-stop-012128       | scheduled-stop-012128       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:22 GMT | 25 Oct 22 01:22 GMT |
	|         | --schedule 5m                  |                             |                   |         |                     |                     |
	| ssh     | -p scheduled-stop-012128       | scheduled-stop-012128       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:22 GMT | 25 Oct 22 01:22 GMT |
	|         | -- sudo systemctl show         |                             |                   |         |                     |                     |
	|         | minikube-scheduled-stop        |                             |                   |         |                     |                     |
	|         | --no-page                      |                             |                   |         |                     |                     |
	| stop    | -p scheduled-stop-012128       | scheduled-stop-012128       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:22 GMT | 25 Oct 22 01:22 GMT |
	|         | --schedule 5s                  |                             |                   |         |                     |                     |
	| delete  | -p scheduled-stop-012128       | scheduled-stop-012128       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:23 GMT | 25 Oct 22 01:24 GMT |
	| start   | -p insufficient-storage-012403 | insufficient-storage-012403 | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:24 GMT |                     |
	|         | --memory=2048 --output=json    |                             |                   |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |                   |         |                     |                     |
	| delete  | -p insufficient-storage-012403 | insufficient-storage-012403 | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:24 GMT | 25 Oct 22 01:24 GMT |
	| start   | -p pause-012456 --memory=2048  | pause-012456                | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:24 GMT | 25 Oct 22 01:27 GMT |
	|         | --install-addons=false         |                             |                   |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |                   |         |                     |                     |
	| start   | -p offline-docker-012456       | offline-docker-012456       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:24 GMT | 25 Oct 22 01:27 GMT |
	|         | --alsologtostderr -v=1         |                             |                   |         |                     |                     |
	|         | --memory=2048 --wait=true      |                             |                   |         |                     |                     |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	| start   | -p NoKubernetes-012456         | NoKubernetes-012456         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:24 GMT |                     |
	|         | --no-kubernetes                |                             |                   |         |                     |                     |
	|         | --kubernetes-version=1.20      |                             |                   |         |                     |                     |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	| start   | -p NoKubernetes-012456         | NoKubernetes-012456         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:24 GMT | 25 Oct 22 01:28 GMT |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	| start   | -p pause-012456                | pause-012456                | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:27 GMT | 25 Oct 22 01:28 GMT |
	|         | --alsologtostderr -v=1         |                             |                   |         |                     |                     |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	| delete  | -p offline-docker-012456       | offline-docker-012456       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:27 GMT | 25 Oct 22 01:28 GMT |
	| start   | -p force-systemd-flag-012812   | force-systemd-flag-012812   | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:28 GMT |                     |
	|         | --memory=2048 --force-systemd  |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                             |                   |         |                     |                     |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	| start   | -p stopped-upgrade-012456      | stopped-upgrade-012456      | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:28 GMT |                     |
	|         | --memory=2200                  |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                             |                   |         |                     |                     |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	| start   | -p NoKubernetes-012456         | NoKubernetes-012456         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:28 GMT |                     |
	|         | --no-kubernetes                |                             |                   |         |                     |                     |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	| pause   | -p pause-012456                | pause-012456                | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:28 GMT |                     |
	|         | --alsologtostderr -v=5         |                             |                   |         |                     |                     |
	|---------|--------------------------------|-----------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/25 01:28:24
	Running on machine: minikube8
	Binary: Built with gc go1.19.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 01:28:24.743724   10896 out.go:296] Setting OutFile to fd 1688 ...
	I1025 01:28:24.817987   10896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:28:24.817987   10896 out.go:309] Setting ErrFile to fd 1692...
	I1025 01:28:24.817987   10896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:28:24.846136   10896 out.go:303] Setting JSON to false
	I1025 01:28:24.850065   10896 start.go:116] hostinfo: {"hostname":"minikube8","uptime":10949,"bootTime":1666650355,"procs":163,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W1025 01:28:24.850131   10896 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 01:28:24.895990   10896 out.go:177] * [NoKubernetes-012456] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1025 01:28:24.899683   10896 notify.go:220] Checking for updates...
	I1025 01:28:24.903334   10896 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 01:28:24.909717   10896 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I1025 01:28:24.918784   10896 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 01:28:24.922985   10896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1025 01:28:20.232372    2776 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-012456 returned with exit code 1
	I1025 01:28:20.241376    2776 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "stopped-upgrade-012456": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-012456: exit status 1
	stdout:
	
	
	stderr:
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: reflect: slice index out of range
	I1025 01:28:20.496170    2776 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube8\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.18.0 exists
	I1025 01:28:20.496170    2776 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-apiserver_v1.18.0" took 3.4182503s
	I1025 01:28:20.498065    2776 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.18.0 succeeded
	I1025 01:28:20.525332    2776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-012456
	I1025 01:28:20.691268    2776 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube8\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.18.0 exists
	I1025 01:28:20.691268    2776 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-scheduler_v1.18.0" took 3.6133475s
	I1025 01:28:20.691268    2776 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.18.0 succeeded
	I1025 01:28:20.822260    2776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64799 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\stopped-upgrade-012456\id_rsa Username:docker}
	W1025 01:28:20.828276    2776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 01:28:20.828276    2776 retry.go:31] will retry after 360.127272ms: ssh: handshake failed: EOF
	I1025 01:28:21.043805    2776 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube8\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.18.0 exists
	I1025 01:28:21.044819    2776 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-controller-manager_v1.18.0" took 3.9668954s
	I1025 01:28:21.044971    2776 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.18.0 succeeded
	I1025 01:28:21.170596    2776 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube8\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 exists
	I1025 01:28:21.171574    2776 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\etcd_3.4.3-0" took 4.0936497s
	I1025 01:28:21.171574    2776 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 succeeded
	I1025 01:28:21.366548    2776 cache.go:156] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube8\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.18.0 exists
	I1025 01:28:21.367382    2776 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-proxy_v1.18.0" took 4.2894563s
	I1025 01:28:21.367382    2776 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.18.0 succeeded
	I1025 01:28:21.367529    2776 cache.go:87] Successfully saved all images to host disk.
	I1025 01:28:21.391745    2776 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.425297s)
	I1025 01:28:21.416141    2776 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 01:28:21.475943    2776 fix.go:57] fixHost completed within 4.0517026s
	I1025 01:28:21.475943    2776 start.go:83] releasing machines lock for "stopped-upgrade-012456", held for 4.0517026s
	W1025 01:28:21.475943    2776 start.go:603] error starting host: provision: get ssh host-port: get port 22 for "stopped-upgrade-012456": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-012456: exit status 1
	stdout:
	
	
	stderr:
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: reflect: slice index out of range
	W1025 01:28:21.475943    2776 out.go:239] ! StartHost failed, but will try again: provision: get ssh host-port: get port 22 for "stopped-upgrade-012456": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-012456: exit status 1
	stdout:
	
	
	stderr:
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: reflect: slice index out of range
	
	I1025 01:28:21.475943    2776 start.go:618] Will try again in 5 seconds ...
	I1025 01:28:24.926380   10896 config.go:180] Loaded profile config "NoKubernetes-012456": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:28:24.927398   10896 start.go:1682] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1025 01:28:24.927398   10896 start.go:1603] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1025 01:28:24.927398   10896 driver.go:362] Setting default libvirt URI to qemu:///system
	I1025 01:28:25.244799   10896 docker.go:137] docker version: linux-20.10.17
	I1025 01:28:25.250821   10896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:28:25.822209   10896 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:89 OomKillDisable:true NGoroutines:73 SystemTime:2022-10-25 01:28:25.4174688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:28:25.825841   10896 out.go:177] * Using the docker driver based on existing profile
	I1025 01:28:21.373989   10500 pod_ready.go:102] pod "kube-apiserver-pause-012456" in "kube-system" namespace has status "Ready":"False"
	I1025 01:28:22.519902   10500 pod_ready.go:92] pod "kube-apiserver-pause-012456" in "kube-system" namespace has status "Ready":"True"
	I1025 01:28:22.519902   10500 pod_ready.go:81] duration metric: took 8.628986s waiting for pod "kube-apiserver-pause-012456" in "kube-system" namespace to be "Ready" ...
	I1025 01:28:22.519902   10500 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-012456" in "kube-system" namespace to be "Ready" ...
	I1025 01:28:22.621484   10500 pod_ready.go:92] pod "kube-controller-manager-pause-012456" in "kube-system" namespace has status "Ready":"True"
	I1025 01:28:22.621535   10500 pod_ready.go:81] duration metric: took 101.632ms waiting for pod "kube-controller-manager-pause-012456" in "kube-system" namespace to be "Ready" ...
	I1025 01:28:22.621567   10500 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w6fq5" in "kube-system" namespace to be "Ready" ...
	I1025 01:28:22.646090   10500 pod_ready.go:92] pod "kube-proxy-w6fq5" in "kube-system" namespace has status "Ready":"True"
	I1025 01:28:22.646090   10500 pod_ready.go:81] duration metric: took 24.5232ms waiting for pod "kube-proxy-w6fq5" in "kube-system" namespace to be "Ready" ...
	I1025 01:28:22.646090   10500 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-012456" in "kube-system" namespace to be "Ready" ...
	I1025 01:28:24.713678   10500 pod_ready.go:102] pod "kube-scheduler-pause-012456" in "kube-system" namespace has status "Ready":"False"
	I1025 01:28:25.827914   10896 start.go:282] selected driver: docker
	I1025 01:28:25.827914   10896 start.go:808] validating driver "docker" against &{Name:NoKubernetes-012456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-012456 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 01:28:25.828167   10896 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 01:28:25.846872   10896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:28:26.451614   10896 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:89 OomKillDisable:true NGoroutines:73 SystemTime:2022-10-25 01:28:26.0306936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
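The struct dump above is the parsed result of the docker system info call logged a few lines earlier. To look at the same fields directly on the host, a minimal sketch (assuming jq is installed; it is not part of the test harness) is:

	docker system info --format '{{json .}}' | jq '{ServerVersion, OSType, NCPU, MemTotal, CgroupDriver}'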
	I1025 01:28:26.451614   10896 start.go:1682] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1025 01:28:26.499070   10896 cni.go:95] Creating CNI manager for ""
	I1025 01:28:26.499070   10896 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 01:28:26.499070   10896 start_flags.go:317] config:
	{Name:NoKubernetes-012456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-012456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 01:28:26.499070   10896 start.go:1682] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1025 01:28:26.499070   10896 start.go:1682] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1025 01:28:26.505355   10896 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-012456
	I1025 01:28:26.507382   10896 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 01:28:26.510549   10896 out.go:177] * Pulling base image ...
	I1025 01:28:26.481367    2776 start.go:364] acquiring machines lock for stopped-upgrade-012456: {Name:mkbb662a19edef333b9998481371b98f13c104df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 01:28:26.481753    2776 start.go:368] acquired machines lock for "stopped-upgrade-012456" in 123.1µs
	I1025 01:28:26.481964    2776 start.go:96] Skipping create...Using existing machine configuration
	I1025 01:28:26.481964    2776 fix.go:55] fixHost starting: m01
	I1025 01:28:26.497985    2776 cli_runner.go:164] Run: docker container inspect stopped-upgrade-012456 --format={{.State.Status}}
	I1025 01:28:26.729230    2776 fix.go:103] recreateIfNeeded on stopped-upgrade-012456: state=Running err=<nil>
	W1025 01:28:26.729230    2776 fix.go:129] unexpected machine state, will restart: <nil>
	I1025 01:28:26.732250    2776 out.go:177] * Updating the running docker "stopped-upgrade-012456" container ...
	I1025 01:28:26.512711   10896 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
	I1025 01:28:26.512711   10896 image.go:82] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	W1025 01:28:26.563161   10896 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
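The 404 above appears to be expected for this profile: the Kubernetes version was set to the v0.0.0 sentinel because no Kubernetes flag was given, and no preloaded image tarball is published for that version, so minikube falls back to the cached kic base image. The same check can be reproduced from any host with a plain HEAD request against the URL from the log:

	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 | head -n 1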
	I1025 01:28:26.563392   10896 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\NoKubernetes-012456\config.json ...
	I1025 01:28:26.745221   10896 image.go:86] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 01:28:26.745221   10896 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 01:28:26.745221   10896 cache.go:208] Successfully downloaded all kic artifacts
	I1025 01:28:26.745221   10896 start.go:364] acquiring machines lock for NoKubernetes-012456: {Name:mk4f1554d9d0f8abbe533287a8cd7b66b668d166 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 01:28:26.745221   10896 start.go:368] acquired machines lock for "NoKubernetes-012456" in 0s
	I1025 01:28:26.745221   10896 start.go:96] Skipping create...Using existing machine configuration
	I1025 01:28:26.745221   10896 fix.go:55] fixHost starting: 
	I1025 01:28:26.759230   10896 cli_runner.go:164] Run: docker container inspect NoKubernetes-012456 --format={{.State.Status}}
	I1025 01:28:26.980128   10896 fix.go:103] recreateIfNeeded on NoKubernetes-012456: state=Running err=<nil>
	W1025 01:28:26.980199   10896 fix.go:129] unexpected machine state, will restart: <nil>
	I1025 01:28:26.985240   10896 out.go:177] * Updating the running docker "NoKubernetes-012456" container ...
	I1025 01:28:26.988245   10896 machine.go:88] provisioning docker machine ...
	I1025 01:28:26.988245   10896 ubuntu.go:169] provisioning hostname "NoKubernetes-012456"
	I1025 01:28:26.997254   10896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-012456
	I1025 01:28:27.232896   10896 main.go:134] libmachine: Using SSH client type: native
	I1025 01:28:27.233894   10896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 64706 <nil> <nil>}
	I1025 01:28:27.233894   10896 main.go:134] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-012456 && echo "NoKubernetes-012456" | sudo tee /etc/hostname
	I1025 01:28:27.400899   10896 main.go:134] libmachine: SSH cmd err, output: <nil>: NoKubernetes-012456
	
	I1025 01:28:27.408891   10896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-012456
	I1025 01:28:27.612808   10896 main.go:134] libmachine: Using SSH client type: native
	I1025 01:28:27.612808   10896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 64706 <nil> <nil>}
	I1025 01:28:27.612808   10896 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-012456' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-012456/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-012456' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 01:28:27.828351   10896 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1025 01:28:27.828351   10896 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I1025 01:28:27.828410   10896 ubuntu.go:177] setting up certificates
	I1025 01:28:27.828455   10896 provision.go:83] configureAuth start
	I1025 01:28:27.839567   10896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-012456
	I1025 01:28:28.059556   10896 provision.go:138] copyHostCerts
	I1025 01:28:28.059556   10896 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I1025 01:28:28.059556   10896 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I1025 01:28:28.059556   10896 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1025 01:28:28.060553   10896 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I1025 01:28:28.061553   10896 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I1025 01:28:28.061553   10896 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1025 01:28:28.062550   10896 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I1025 01:28:28.062550   10896 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I1025 01:28:28.062550   10896 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1675 bytes)
	I1025 01:28:28.063562   10896 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.NoKubernetes-012456 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube NoKubernetes-012456]
	I1025 01:28:28.159638   10896 provision.go:172] copyRemoteCerts
	I1025 01:28:28.171641   10896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 01:28:28.179642   10896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-012456
	I1025 01:28:28.405771   10896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64706 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\NoKubernetes-012456\id_rsa Username:docker}
	I1025 01:28:28.545303   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 01:28:28.610788   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 01:28:28.671694   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 01:28:28.722691   10896 provision.go:86] duration metric: configureAuth took 893.255ms
	I1025 01:28:28.722691   10896 ubuntu.go:193] setting minikube options for container-runtime
	I1025 01:28:28.722691   10896 config.go:180] Loaded profile config "NoKubernetes-012456": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1025 01:28:28.729689   10896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-012456
	I1025 01:28:28.966497   10896 main.go:134] libmachine: Using SSH client type: native
	I1025 01:28:28.966952   10896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 64706 <nil> <nil>}
	I1025 01:28:28.966952   10896 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 01:28:29.204173   10896 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 01:28:29.204173   10896 ubuntu.go:71] root file system type: overlay
	I1025 01:28:29.205223   10896 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 01:28:29.216699   10896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-012456
	I1025 01:28:29.435263   10896 main.go:134] libmachine: Using SSH client type: native
	I1025 01:28:29.435373   10896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 64706 <nil> <nil>}
	I1025 01:28:29.435750   10896 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 01:28:29.675270   10896 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 01:28:29.687388   10896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-012456
	I1025 01:28:26.736238    2776 machine.go:88] provisioning docker machine ...
	I1025 01:28:26.736238    2776 ubuntu.go:169] provisioning hostname "stopped-upgrade-012456"
	I1025 01:28:26.743226    2776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-012456
	I1025 01:28:26.970643    2776 main.go:134] libmachine: Using SSH client type: native
	I1025 01:28:26.971067    2776 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 64799 <nil> <nil>}
	I1025 01:28:26.971150    2776 main.go:134] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-012456 && echo "stopped-upgrade-012456" | sudo tee /etc/hostname
	I1025 01:28:27.133845    2776 main.go:134] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-012456
	
	I1025 01:28:27.144195    2776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-012456
	I1025 01:28:27.358898    2776 main.go:134] libmachine: Using SSH client type: native
	I1025 01:28:27.359894    2776 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 64799 <nil> <nil>}
	I1025 01:28:27.359894    2776 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-012456' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-012456/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-012456' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 01:28:27.545808    2776 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1025 01:28:27.545808    2776 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I1025 01:28:27.545808    2776 ubuntu.go:177] setting up certificates
	I1025 01:28:27.545808    2776 provision.go:83] configureAuth start
	I1025 01:28:27.554807    2776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-012456
	I1025 01:28:27.762419    2776 provision.go:138] copyHostCerts
	I1025 01:28:27.762419    2776 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I1025 01:28:27.762419    2776 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I1025 01:28:27.763410    2776 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1675 bytes)
	I1025 01:28:27.764420    2776 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I1025 01:28:27.764420    2776 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I1025 01:28:27.764420    2776 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1025 01:28:27.765428    2776 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I1025 01:28:27.765428    2776 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I1025 01:28:27.765428    2776 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1025 01:28:27.767266    2776 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.stopped-upgrade-012456 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-012456]
	I1025 01:28:28.111404    2776 provision.go:172] copyRemoteCerts
	I1025 01:28:28.121559    2776 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 01:28:28.127621    2776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-012456
	I1025 01:28:28.341777    2776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64799 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\stopped-upgrade-012456\id_rsa Username:docker}
	I1025 01:28:28.456773    2776 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 01:28:28.515039    2776 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 01:28:28.574020    2776 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 01:28:28.639337    2776 provision.go:86] duration metric: configureAuth took 1.0934048s
	I1025 01:28:28.639381    2776 ubuntu.go:193] setting minikube options for container-runtime
	I1025 01:28:28.640187    2776 config.go:180] Loaded profile config "stopped-upgrade-012456": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 01:28:28.651452    2776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-012456
	I1025 01:28:28.853921    2776 main.go:134] libmachine: Using SSH client type: native
	I1025 01:28:28.854963    2776 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 64799 <nil> <nil>}
	I1025 01:28:28.854963    2776 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 01:28:29.073793    2776 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 01:28:29.073793    2776 ubuntu.go:71] root file system type: overlay
	I1025 01:28:29.074554    2776 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 01:28:29.089431    2776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-012456
	I1025 01:28:29.323410    2776 main.go:134] libmachine: Using SSH client type: native
	I1025 01:28:29.324408    2776 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 64799 <nil> <nil>}
	I1025 01:28:29.324408    2776 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 01:28:29.547459    2776 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 01:28:29.558610    2776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-012456
	I1025 01:28:29.784556    2776 main.go:134] libmachine: Using SSH client type: native
	I1025 01:28:29.785107    2776 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 64799 <nil> <nil>}
	I1025 01:28:29.785178    2776 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
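The command above relies on diff -u exiting non-zero when the freshly rendered unit differs from the installed one, so the new file is only moved into place, and docker only reloaded, re-enabled and restarted, when something actually changed; the unified diff a little further down in this log shows exactly what changed on this run. As a sketch of how to confirm which unit the node ended up with (assuming the kic node container keeps the profile name, as the docker driver normally does):

	docker exec stopped-upgrade-012456 systemctl cat docker.service | grep -E 'ExecReload|Restart='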
	I1025 01:28:27.202924   10500 pod_ready.go:102] pod "kube-scheduler-pause-012456" in "kube-system" namespace has status "Ready":"False"
	I1025 01:28:29.205082   10500 pod_ready.go:102] pod "kube-scheduler-pause-012456" in "kube-system" namespace has status "Ready":"False"
	I1025 01:28:30.698458   10500 pod_ready.go:92] pod "kube-scheduler-pause-012456" in "kube-system" namespace has status "Ready":"True"
	I1025 01:28:30.698458   10500 pod_ready.go:81] duration metric: took 8.0523123s waiting for pod "kube-scheduler-pause-012456" in "kube-system" namespace to be "Ready" ...
	I1025 01:28:30.698458   10500 pod_ready.go:38] duration metric: took 16.8664742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 01:28:30.698458   10500 api_server.go:51] waiting for apiserver process to appear ...
	I1025 01:28:30.708461   10500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:28:30.733457   10500 api_server.go:71] duration metric: took 17.559009s to wait for apiserver process to appear ...
	I1025 01:28:30.733457   10500 api_server.go:87] waiting for apiserver healthz status ...
	I1025 01:28:30.733457   10500 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64564/healthz ...
	I1025 01:28:30.747463   10500 api_server.go:278] https://127.0.0.1:64564/healthz returned 200:
	ok
	I1025 01:28:30.751455   10500 api_server.go:140] control plane version: v1.25.3
	I1025 01:28:30.751455   10500 api_server.go:130] duration metric: took 17.9979ms to wait for apiserver health ...
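The healthz probe above goes through the port the docker driver forwards to the apiserver (64564 on this run; the port changes between runs). While the cluster is still up it can be reproduced from the host; -k is needed because the serving certificate is issued by minikube's own CA rather than anything in the system trust store:

	curl -sk https://127.0.0.1:64564/healthz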
	I1025 01:28:30.751455   10500 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 01:28:30.764121   10500 system_pods.go:59] 7 kube-system pods found
	I1025 01:28:30.764216   10500 system_pods.go:61] "coredns-565d847f94-wfbsl" [c9519d5a-7713-409d-b2cf-7bc4d8108ac4] Running
	I1025 01:28:30.764216   10500 system_pods.go:61] "etcd-pause-012456" [902c2bb8-ba69-4ca7-ba30-14966a014c29] Running
	I1025 01:28:30.764216   10500 system_pods.go:61] "kube-apiserver-pause-012456" [61890d72-e6b6-4c43-9d27-8e88ad04f99b] Running
	I1025 01:28:30.764216   10500 system_pods.go:61] "kube-controller-manager-pause-012456" [1a356372-4ee8-4cb1-97d7-113bb2db9870] Running
	I1025 01:28:30.764216   10500 system_pods.go:61] "kube-proxy-w6fq5" [172144c1-0526-4f7d-8f6f-e793d007d436] Running
	I1025 01:28:30.764314   10500 system_pods.go:61] "kube-scheduler-pause-012456" [487162f6-26f0-41bf-8d04-de17e2dbffba] Running
	I1025 01:28:30.764314   10500 system_pods.go:61] "storage-provisioner" [6de82917-024c-4c3a-a639-c4d922fafb55] Running
	I1025 01:28:30.764360   10500 system_pods.go:74] duration metric: took 12.9048ms to wait for pod list to return data ...
	I1025 01:28:30.764360   10500 default_sa.go:34] waiting for default service account to be created ...
	I1025 01:28:30.778974   10500 default_sa.go:45] found service account: "default"
	I1025 01:28:30.779051   10500 default_sa.go:55] duration metric: took 14.6914ms for default service account to be created ...
	I1025 01:28:30.779051   10500 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 01:28:30.801965   10500 system_pods.go:86] 7 kube-system pods found
	I1025 01:28:30.801965   10500 system_pods.go:89] "coredns-565d847f94-wfbsl" [c9519d5a-7713-409d-b2cf-7bc4d8108ac4] Running
	I1025 01:28:30.801965   10500 system_pods.go:89] "etcd-pause-012456" [902c2bb8-ba69-4ca7-ba30-14966a014c29] Running
	I1025 01:28:30.801965   10500 system_pods.go:89] "kube-apiserver-pause-012456" [61890d72-e6b6-4c43-9d27-8e88ad04f99b] Running
	I1025 01:28:30.801965   10500 system_pods.go:89] "kube-controller-manager-pause-012456" [1a356372-4ee8-4cb1-97d7-113bb2db9870] Running
	I1025 01:28:30.801965   10500 system_pods.go:89] "kube-proxy-w6fq5" [172144c1-0526-4f7d-8f6f-e793d007d436] Running
	I1025 01:28:30.801965   10500 system_pods.go:89] "kube-scheduler-pause-012456" [487162f6-26f0-41bf-8d04-de17e2dbffba] Running
	I1025 01:28:30.801965   10500 system_pods.go:89] "storage-provisioner" [6de82917-024c-4c3a-a639-c4d922fafb55] Running
	I1025 01:28:30.801965   10500 system_pods.go:126] duration metric: took 22.9138ms to wait for k8s-apps to be running ...
	I1025 01:28:30.801965   10500 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 01:28:30.815272   10500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 01:28:30.843645   10500 system_svc.go:56] duration metric: took 41.6791ms WaitForService to wait for kubelet.
	I1025 01:28:30.843645   10500 kubeadm.go:573] duration metric: took 17.669196s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 01:28:30.843645   10500 node_conditions.go:102] verifying NodePressure condition ...
	I1025 01:28:30.851640   10500 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I1025 01:28:30.851640   10500 node_conditions.go:123] node cpu capacity is 16
	I1025 01:28:30.851640   10500 node_conditions.go:105] duration metric: took 7.9947ms to run NodePressure ...
	I1025 01:28:30.851640   10500 start.go:217] waiting for startup goroutines ...
	I1025 01:28:30.862644   10500 ssh_runner.go:195] Run: rm -f paused
	I1025 01:28:31.097313   10500 start.go:506] kubectl: 1.18.2, cluster: 1.25.3 (minor skew: 7)
	I1025 01:28:31.099189   10500 out.go:177] 
	W1025 01:28:31.102186   10500 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilites with Kubernetes 1.25.3.
	I1025 01:28:31.105184   10500 out.go:177]   - Want kubectl v1.25.3? Try 'minikube kubectl -- get pods -A'
	I1025 01:28:31.108224   10500 out.go:177] * Done! kubectl is now configured to use "pause-012456" cluster and "default" namespace by default
	I1025 01:28:29.908842   10896 main.go:134] libmachine: Using SSH client type: native
	I1025 01:28:29.909871   10896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 64706 <nil> <nil>}
	I1025 01:28:29.909871   10896 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 01:28:30.392908   10896 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1025 01:28:30.392908   10896 machine.go:91] provisioned docker machine in 3.4046399s
	I1025 01:28:30.392908   10896 start.go:300] post-start starting for "NoKubernetes-012456" (driver="docker")
	I1025 01:28:30.392908   10896 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 01:28:30.407183   10896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 01:28:30.415184   10896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-012456
	I1025 01:28:30.614290   10896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64706 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\NoKubernetes-012456\id_rsa Username:docker}
	I1025 01:28:30.786724   10896 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 01:28:30.804501   10896 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 01:28:30.804501   10896 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 01:28:30.804568   10896 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 01:28:30.804568   10896 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1025 01:28:30.804568   10896 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I1025 01:28:30.804568   10896 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I1025 01:28:30.805870   10896 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem -> 42002.pem in /etc/ssl/certs
	I1025 01:28:30.823513   10896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 01:28:30.844645   10896 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem --> /etc/ssl/certs/42002.pem (1708 bytes)
	I1025 01:28:30.910786   10896 start.go:303] post-start completed in 517.8747ms
	I1025 01:28:30.927369   10896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 01:28:30.934367   10896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-012456
	I1025 01:28:31.160088   10896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64706 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\NoKubernetes-012456\id_rsa Username:docker}
	I1025 01:28:31.348577   10896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 01:28:31.362557   10896 fix.go:57] fixHost completed within 4.6173046s
	I1025 01:28:31.362557   10896 start.go:83] releasing machines lock for "NoKubernetes-012456", held for 4.6173046s
	I1025 01:28:31.381555   10896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-012456
	I1025 01:28:31.653067   10896 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1025 01:28:31.663082   10896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-012456
	I1025 01:28:31.664072   10896 ssh_runner.go:195] Run: systemctl --version
	I1025 01:28:31.676073   10896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-012456
	I1025 01:28:31.891066   10896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64706 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\NoKubernetes-012456\id_rsa Username:docker}
	I1025 01:28:31.907067   10896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64706 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\NoKubernetes-012456\id_rsa Username:docker}
	I1025 01:28:32.039065   10896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 01:28:32.379073   10896 out.go:177]   - Kubernetes: Stopping ...
	I1025 01:28:32.590381   10896 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
	I1025 01:28:32.803381   10896 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	W1025 01:28:32.878875   10896 kubeadm.go:858] found 9 kube-system containers to stop
	I1025 01:28:32.878875   10896 docker.go:443] Stopping containers: [b4d54f150873 7499efac29fd 1187c3a32c3d 75ca68ba4db7 99bf23936f5b 52559f2b5571 b361acb9b20c f70633516d8d 5a19e76bfb56]
	I1025 01:28:32.885863   10896 ssh_runner.go:195] Run: docker stop b4d54f150873 7499efac29fd 1187c3a32c3d 75ca68ba4db7 99bf23936f5b 52559f2b5571 b361acb9b20c f70633516d8d 5a19e76bfb56
	I1025 01:28:36.392295    2776 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-25 01:26:57.016081000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-10-25 01:28:29.531324000 +0000
	@@ -5,9 +5,12 @@
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -23,7 +26,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	
	I1025 01:28:36.392295    2776 machine.go:91] provisioned docker machine in 9.6559905s
	I1025 01:28:36.392295    2776 start.go:300] post-start starting for "stopped-upgrade-012456" (driver="docker")
	I1025 01:28:36.393347    2776 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 01:28:36.406287    2776 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 01:28:36.414298    2776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-012456
	I1025 01:28:36.609367    2776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64799 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\stopped-upgrade-012456\id_rsa Username:docker}
	I1025 01:28:36.747482    2776 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 01:28:36.756502    2776 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 01:28:36.756502    2776 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 01:28:36.756502    2776 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 01:28:36.756502    2776 info.go:137] Remote host: Ubuntu 19.10
	I1025 01:28:36.756502    2776 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I1025 01:28:36.756502    2776 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I1025 01:28:36.757471    2776 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem -> 42002.pem in /etc/ssl/certs
	I1025 01:28:36.775482    2776 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 01:28:36.802247    2776 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem --> /etc/ssl/certs/42002.pem (1708 bytes)
	I1025 01:28:36.849739    2776 start.go:303] post-start completed in 457.4407ms
	I1025 01:28:36.859733    2776 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 01:28:36.866769    2776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-012456
	I1025 01:28:37.068066    2776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64799 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\stopped-upgrade-012456\id_rsa Username:docker}
	I1025 01:28:37.223537    2776 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 01:28:37.241578    2776 fix.go:57] fixHost completed within 10.7595393s
	I1025 01:28:37.241578    2776 start.go:83] releasing machines lock for "stopped-upgrade-012456", held for 10.7597502s
	I1025 01:28:37.251589    2776 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-012456
	I1025 01:28:37.492562    2776 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1025 01:28:37.501545    2776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-012456
	I1025 01:28:37.501545    2776 ssh_runner.go:195] Run: systemctl --version
	I1025 01:28:37.508548    2776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-012456
	I1025 01:28:37.709399    2776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64799 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\stopped-upgrade-012456\id_rsa Username:docker}
	I1025 01:28:37.728378    2776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64799 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\stopped-upgrade-012456\id_rsa Username:docker}
	I1025 01:28:37.913109    2776 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 01:28:37.949667    2776 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1025 01:28:37.961394    2776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 01:28:37.990428    2776 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 01:28:38.044413    2776 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 01:28:38.346065    2776 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 01:28:38.484895    2776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:28:38.640529    2776 ssh_runner.go:195] Run: sudo systemctl restart docker
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-10-25 01:26:19 UTC, end at Tue 2022-10-25 01:28:42 UTC. --
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.842030500Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.882860100Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.924414100Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.924555300Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.924577400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.924589000Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.924600500Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.924614400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.925058600Z" level=info msg="Loading containers: start."
	Oct 25 01:27:54 pause-012456 dockerd[4030]: time="2022-10-25T01:27:54.309543200Z" level=info msg="ignoring event" container=499586c4bbe94db6c1f33d9ea0b88e0d0d5252e734eebb56d94069f135ef12bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:27:54 pause-012456 dockerd[4030]: time="2022-10-25T01:27:54.369135100Z" level=info msg="ignoring event" container=b237899b1595dc276110f650ba4ef3efef1f20a92a5f73f7c4fc1f7a10d3d4a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:27:54 pause-012456 dockerd[4030]: time="2022-10-25T01:27:54.467860100Z" level=info msg="ignoring event" container=299c257d1eab91c1f40c77668db273653ba158f253ca2706bdf91ca73140d2dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:27:54 pause-012456 dockerd[4030]: time="2022-10-25T01:27:54.472496500Z" level=info msg="ignoring event" container=03fb10b262ea6b4bd4a8c414fb69058dc7b75184d1e9b6c14baed645c863524c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:27:54 pause-012456 dockerd[4030]: time="2022-10-25T01:27:54.959504100Z" level=info msg="Removing stale sandbox 93fc0fd8d76e3015b44184307586fa7121224ad08e48fa644ceb6a45eff26ac2 (499586c4bbe94db6c1f33d9ea0b88e0d0d5252e734eebb56d94069f135ef12bb)"
	Oct 25 01:27:54 pause-012456 dockerd[4030]: time="2022-10-25T01:27:54.966748000Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint d17fe93ce9172c225e26943b192bcb01635182158229455d01b90dfceadcae2f bc7722bdfac74ec72924fbd2fa7f3d7d4191e2349866a5d3bfa4e48051d629d0], retrying...."
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.206549800Z" level=info msg="Removing stale sandbox 0d9b0d7864a536357671b6773c29b7925ba8a19d75a9af250b7d12fe2995750f (b237899b1595dc276110f650ba4ef3efef1f20a92a5f73f7c4fc1f7a10d3d4a2)"
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.216983300Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint d17fe93ce9172c225e26943b192bcb01635182158229455d01b90dfceadcae2f eb146ee1462a1bf76f97dfbcc5e359088f2d4cf68f71f7af7de5485f8ef3d1b6], retrying...."
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.317279400Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.490481800Z" level=info msg="Loading containers: done."
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.584395400Z" level=info msg="Docker daemon" commit=e42327a graphdriver(s)=overlay2 version=20.10.18
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.584548800Z" level=info msg="Daemon has completed initialization"
	Oct 25 01:27:55 pause-012456 systemd[1]: Started Docker Application Container Engine.
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.657182000Z" level=info msg="API listen on [::]:2376"
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.663853400Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 25 01:28:34 pause-012456 dockerd[4030]: time="2022-10-25T01:28:34.013563600Z" level=error msg="Handler for POST /v1.41/containers/9a61857de96a/pause returned error: Cannot pause container 9a61857de96a3e8c49802ee9da9ed1c19d357f354f0e1efa685d91a22624558e: OCI runtime pause failed: unable to freeze: unknown"
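The last journal line above is the only error in this excerpt: runc could not freeze container 9a61857de96a (the etcd container, per the status table below) when a pause was requested on WSL2. A first diagnostic step, assuming the node container keeps the profile name as the docker driver normally does, is to check which cgroup hierarchy the node sees, since freezing goes through cgroup.freeze on cgroup v2 and the separate freezer controller on v1:

	docker exec pause-012456 sh -c 'stat -fc %T /sys/fs/cgroup; ls /sys/fs/cgroup'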
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	c6f5fed8d5a90       6e38f40d628db       19 seconds ago       Running             storage-provisioner       0                   e61a15ac0a82b
	96156d53ee285       6d23ec0e8b87e       25 seconds ago       Running             kube-scheduler            2                   b8028b4301dce
	162f68bfd3ce8       beaaf00edd38a       29 seconds ago       Running             kube-proxy                2                   0e35b01082015
	80a120881748e       5185b96f0becf       43 seconds ago       Running             coredns                   1                   fd7c603c9b74f
	48c9607ccbd6d       0346dbd74bcb9       44 seconds ago       Running             kube-apiserver            1                   1aadf09053d49
	805d7017a1e7b       6039992312758       44 seconds ago       Running             kube-controller-manager   1                   839db953c0135
	9a61857de96a3       a8a176a5d5d69       44 seconds ago       Running             etcd                      1                   21921b116293f
	299c257d1eab9       6d23ec0e8b87e       56 seconds ago       Exited              kube-scheduler            1                   b237899b1595d
	03fb10b262ea6       beaaf00edd38a       56 seconds ago       Exited              kube-proxy                1                   499586c4bbe94
	bf591621d1f2b       5185b96f0becf       About a minute ago   Exited              coredns                   0                   efad283a29afe
	36238dbd7ae7f       6039992312758       About a minute ago   Exited              kube-controller-manager   0                   88cbc65be559d
	0874739bcbb5b       0346dbd74bcb9       About a minute ago   Exited              kube-apiserver            0                   d102dbe30c94c
	7518a38e8cedc       a8a176a5d5d69       About a minute ago   Exited              etcd                      0                   edbd72e6bf321
	
	* 
	* ==> coredns [80a120881748] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> coredns [bf591621d1f2] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Oct25 01:01] WSL2: Performing memory compaction.
	[Oct25 01:03] WSL2: Performing memory compaction.
	[Oct25 01:04] WSL2: Performing memory compaction.
	[Oct25 01:05] WSL2: Performing memory compaction.
	[Oct25 01:06] WSL2: Performing memory compaction.
	[Oct25 01:07] WSL2: Performing memory compaction.
	[Oct25 01:08] WSL2: Performing memory compaction.
	[Oct25 01:09] WSL2: Performing memory compaction.
	[Oct25 01:10] WSL2: Performing memory compaction.
	[Oct25 01:11] WSL2: Performing memory compaction.
	[Oct25 01:12] WSL2: Performing memory compaction.
	[Oct25 01:13] WSL2: Performing memory compaction.
	[Oct25 01:14] WSL2: Performing memory compaction.
	[Oct25 01:15] WSL2: Performing memory compaction.
	[Oct25 01:17] WSL2: Performing memory compaction.
	[Oct25 01:18] WSL2: Performing memory compaction.
	[Oct25 01:19] WSL2: Performing memory compaction.
	[Oct25 01:20] WSL2: Performing memory compaction.
	[Oct25 01:21] WSL2: Performing memory compaction.
	[Oct25 01:22] WSL2: Performing memory compaction.
	[Oct25 01:23] WSL2: Performing memory compaction.
	[Oct25 01:24] WSL2: Performing memory compaction.
	[Oct25 01:25] WSL2: Performing memory compaction.
	[Oct25 01:26] process 'docker/tmp/qemu-check146077527/check' started with executable stack
	[Oct25 01:28] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [7518a38e8ced] <==
	* {"level":"warn","ts":"2022-10-25T01:27:22.130Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"692.5441ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-012456\" ","response":"range_response_count:1 size:5172"}
	{"level":"info","ts":"2022-10-25T01:27:22.131Z","caller":"traceutil/trace.go:171","msg":"trace[343945099] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-012456; range_end:; response_count:1; response_revision:275; }","duration":"692.6892ms","start":"2022-10-25T01:27:21.438Z","end":"2022-10-25T01:27:22.131Z","steps":["trace[343945099] 'agreement among raft nodes before linearized reading'  (duration: 670.3274ms)","trace[343945099] 'range keys from in-memory index tree'  (duration: 22.1645ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:27:22.131Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:27:21.438Z","time spent":"692.8558ms","remote":"127.0.0.1:35546","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5195,"request content":"key:\"/registry/pods/kube-system/etcd-pause-012456\" "}
	{"level":"info","ts":"2022-10-25T01:27:28.279Z","caller":"traceutil/trace.go:171","msg":"trace[134672829] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"100.337ms","start":"2022-10-25T01:27:28.179Z","end":"2022-10-25T01:27:28.279Z","steps":["trace[134672829] 'process raft request'  (duration: 100.0326ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T01:27:28.280Z","caller":"traceutil/trace.go:171","msg":"trace[1278870774] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"101.5496ms","start":"2022-10-25T01:27:28.178Z","end":"2022-10-25T01:27:28.280Z","steps":["trace[1278870774] 'process raft request'  (duration: 88.1302ms)","trace[1278870774] 'compare'  (duration: 11.9333ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:27:28.479Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.9714ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2022-10-25T01:27:28.480Z","caller":"traceutil/trace.go:171","msg":"trace[291873918] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:350; }","duration":"102.3084ms","start":"2022-10-25T01:27:28.377Z","end":"2022-10-25T01:27:28.479Z","steps":["trace[291873918] 'agreement among raft nodes before linearized reading'  (duration: 87.5718ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:27:32.874Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"200.5964ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-w6fq5\" ","response":"range_response_count:1 size:4417"}
	{"level":"info","ts":"2022-10-25T01:27:32.874Z","caller":"traceutil/trace.go:171","msg":"trace[1876732971] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-w6fq5; range_end:; response_count:1; response_revision:374; }","duration":"200.7091ms","start":"2022-10-25T01:27:32.674Z","end":"2022-10-25T01:27:32.874Z","steps":["trace[1876732971] 'agreement among raft nodes before linearized reading'  (duration: 91.242ms)","trace[1876732971] 'range keys from in-memory index tree'  (duration: 109.314ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:27:32.875Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"109.4861ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638331946385336563 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:373 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-10-25T01:27:32.875Z","caller":"traceutil/trace.go:171","msg":"trace[1457078045] linearizableReadLoop","detail":"{readStateIndex:388; appliedIndex:386; }","duration":"110.0969ms","start":"2022-10-25T01:27:32.765Z","end":"2022-10-25T01:27:32.875Z","steps":["trace[1457078045] 'read index received'  (duration: 41.578ms)","trace[1457078045] 'applied index is now lower than readState.Index'  (duration: 68.515ms)"],"step_count":2}
	{"level":"info","ts":"2022-10-25T01:27:32.875Z","caller":"traceutil/trace.go:171","msg":"trace[245622856] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"110.3024ms","start":"2022-10-25T01:27:32.765Z","end":"2022-10-25T01:27:32.875Z","steps":["trace[245622856] 'process raft request'  (duration: 109.932ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T01:27:32.875Z","caller":"traceutil/trace.go:171","msg":"trace[612610884] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"192.047ms","start":"2022-10-25T01:27:32.683Z","end":"2022-10-25T01:27:32.875Z","steps":["trace[612610884] 'process raft request'  (duration: 81.6109ms)","trace[612610884] 'compare'  (duration: 109.1515ms)"],"step_count":2}
	{"level":"info","ts":"2022-10-25T01:27:32.876Z","caller":"traceutil/trace.go:171","msg":"trace[111683511] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"108.7167ms","start":"2022-10-25T01:27:32.768Z","end":"2022-10-25T01:27:32.876Z","steps":["trace[111683511] 'process raft request'  (duration: 107.139ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:27:32.877Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"194.3549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-012456\" ","response":"range_response_count:1 size:4548"}
	{"level":"info","ts":"2022-10-25T01:27:32.877Z","caller":"traceutil/trace.go:171","msg":"trace[99413065] range","detail":"{range_begin:/registry/minions/pause-012456; range_end:; response_count:1; response_revision:377; }","duration":"194.3984ms","start":"2022-10-25T01:27:32.682Z","end":"2022-10-25T01:27:32.877Z","steps":["trace[99413065] 'agreement among raft nodes before linearized reading'  (duration: 194.308ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T01:27:44.965Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-10-25T01:27:44.965Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"pause-012456","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/10/25 01:27:44 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2022/10/25 01:27:44 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/10/25 01:27:45 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-10-25T01:27:45.067Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-10-25T01:27:45.175Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-10-25T01:27:45.178Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-10-25T01:27:45.178Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"pause-012456","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [9a61857de96a] <==
	* {"level":"info","ts":"2022-10-25T01:28:11.323Z","caller":"traceutil/trace.go:171","msg":"trace[1596209342] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:64; response_revision:398; }","duration":"2.7312188s","start":"2022-10-25T01:28:08.592Z","end":"2022-10-25T01:28:11.323Z","steps":["trace[1596209342] 'agreement among raft nodes before linearized reading'  (duration: 2.7167943s)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:28:11.324Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:28:08.592Z","time spent":"2.7314208s","remote":"127.0.0.1:38156","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":64,"response size":57837,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" "}
	{"level":"warn","ts":"2022-10-25T01:28:11.325Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.7330405s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2022-10-25T01:28:11.325Z","caller":"traceutil/trace.go:171","msg":"trace[1748078964] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:398; }","duration":"2.7331006s","start":"2022-10-25T01:28:08.592Z","end":"2022-10-25T01:28:11.325Z","steps":["trace[1748078964] 'agreement among raft nodes before linearized reading'  (duration: 2.7169552s)","trace[1748078964] 'range keys from in-memory index tree'  (duration: 16.028ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:28:11.325Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:28:08.592Z","time spent":"2.733231s","remote":"127.0.0.1:38162","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":465,"request content":"key:\"/registry/priorityclasses/system-node-critical\" "}
	{"level":"warn","ts":"2022-10-25T01:28:17.302Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2022-10-25T01:28:17.302Z","caller":"traceutil/trace.go:171","msg":"trace[2132762986] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:452; }","duration":"113.0164ms","start":"2022-10-25T01:28:17.189Z","end":"2022-10-25T01:28:17.302Z","steps":["trace[2132762986] 'agreement among raft nodes before linearized reading'  (duration: 43.1061ms)","trace[2132762986] 'range keys from in-memory index tree'  (duration: 69.3481ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:28:21.298Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.9595ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-012456\" ","response":"range_response_count:1 size:4548"}
	{"level":"info","ts":"2022-10-25T01:28:21.300Z","caller":"traceutil/trace.go:171","msg":"trace[1884648026] range","detail":"{range_begin:/registry/minions/pause-012456; range_end:; response_count:1; response_revision:471; }","duration":"112.0911ms","start":"2022-10-25T01:28:21.187Z","end":"2022-10-25T01:28:21.300Z","steps":["trace[1884648026] 'agreement among raft nodes before linearized reading'  (duration: 78.7687ms)","trace[1884648026] 'range keys from in-memory index tree'  (duration: 32.1535ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:28:22.460Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"116.8682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-10-25T01:28:22.460Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"525.6969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/kube-scheduler-pause-012456.17212b977baa85e0\" ","response":"range_response_count:1 size:741"}
	{"level":"info","ts":"2022-10-25T01:28:22.460Z","caller":"traceutil/trace.go:171","msg":"trace[1167109480] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:485; }","duration":"117.0416ms","start":"2022-10-25T01:28:22.343Z","end":"2022-10-25T01:28:22.460Z","steps":["trace[1167109480] 'range keys from in-memory index tree'  (duration: 116.6822ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T01:28:22.460Z","caller":"traceutil/trace.go:171","msg":"trace[1903219565] range","detail":"{range_begin:/registry/events/kube-system/kube-scheduler-pause-012456.17212b977baa85e0; range_end:; response_count:1; response_revision:485; }","duration":"525.7639ms","start":"2022-10-25T01:28:21.934Z","end":"2022-10-25T01:28:22.460Z","steps":["trace[1903219565] 'range keys from in-memory index tree'  (duration: 525.5791ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:28:22.460Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:28:21.934Z","time spent":"525.8321ms","remote":"127.0.0.1:38080","response type":"/etcdserverpb.KV/Range","request count":0,"request size":75,"response count":1,"response size":764,"request content":"key:\"/registry/events/kube-system/kube-scheduler-pause-012456.17212b977baa85e0\" "}
	{"level":"warn","ts":"2022-10-25T01:28:22.460Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"532.6543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-012456\" ","response":"range_response_count:1 size:7337"}
	{"level":"info","ts":"2022-10-25T01:28:22.471Z","caller":"traceutil/trace.go:171","msg":"trace[1803246184] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-012456; range_end:; response_count:1; response_revision:485; }","duration":"544.171ms","start":"2022-10-25T01:28:21.927Z","end":"2022-10-25T01:28:22.471Z","steps":["trace[1803246184] 'range keys from in-memory index tree'  (duration: 532.4257ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:28:22.471Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:28:21.927Z","time spent":"544.3598ms","remote":"127.0.0.1:38104","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":7360,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-012456\" "}
	{"level":"warn","ts":"2022-10-25T01:28:33.470Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638331946401562000,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-10-25T01:28:34.476Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"909.8162ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638331946401562003 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.76.2\" mod_revision:489 > success:<request_put:<key:\"/registry/masterleases/192.168.76.2\" value_size:65 lease:6414959909546786193 >> failure:<request_range:<key:\"/registry/masterleases/192.168.76.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-10-25T01:28:34.477Z","caller":"traceutil/trace.go:171","msg":"trace[896272498] linearizableReadLoop","detail":"{readStateIndex:529; appliedIndex:528; }","duration":"1.5072272s","start":"2022-10-25T01:28:32.969Z","end":"2022-10-25T01:28:34.476Z","steps":["trace[896272498] 'read index received'  (duration: 596.7526ms)","trace[896272498] 'applied index is now lower than readState.Index'  (duration: 910.4695ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:28:34.477Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.5074838s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:604"}
	{"level":"info","ts":"2022-10-25T01:28:34.477Z","caller":"traceutil/trace.go:171","msg":"trace[691269448] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:504; }","duration":"1.5076488s","start":"2022-10-25T01:28:32.969Z","end":"2022-10-25T01:28:34.477Z","steps":["trace[691269448] 'agreement among raft nodes before linearized reading'  (duration: 1.5073894s)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:28:34.477Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:28:32.969Z","time spent":"1.5078266s","remote":"127.0.0.1:38100","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":627,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2022-10-25T01:28:34.477Z","caller":"traceutil/trace.go:171","msg":"trace[164310724] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"1.691858s","start":"2022-10-25T01:28:32.785Z","end":"2022-10-25T01:28:34.477Z","steps":["trace[164310724] 'process raft request'  (duration: 781.1647ms)","trace[164310724] 'compare'  (duration: 909.3344ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:28:34.477Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:28:32.785Z","time spent":"1.692345s","remote":"127.0.0.1:37956","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":116,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.76.2\" mod_revision:489 > success:<request_put:<key:\"/registry/masterleases/192.168.76.2\" value_size:65 lease:6414959909546786193 >> failure:<request_range:<key:\"/registry/masterleases/192.168.76.2\" > >"}
	
	* 
	* ==> kernel <==
	*  01:28:53 up  1:35,  0 users,  load average: 10.48, 6.23, 3.60
	Linux pause-012456 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [0874739bcbb5] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 01:27:53.456229       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 01:27:53.506482       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 01:27:53.510747       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [48c9607ccbd6] <==
	* Trace[693741938]: ---"Listing from storage done" 2737ms (01:28:11.328)
	Trace[693741938]: [2.7385609s] [2.7385609s] END
	I1025 01:28:11.330547       1 trace.go:205] Trace[2130098445]: "Get" url:/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical,user-agent:kube-apiserver/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:def4e624-af31-44e5-a096-9a16f41cdce6,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (25-Oct-2022 01:28:08.589) (total time: 2741ms):
	Trace[2130098445]: ---"About to write a response" 2740ms (01:28:11.330)
	Trace[2130098445]: [2.7410195s] [2.7410195s] END
	I1025 01:28:11.331165       1 trace.go:205] Trace[1016833730]: "Create etcd3" audit-id:265bf050-ac24-4f6b-8faf-ea62e11d7f79,key:/events/kube-system/kube-apiserver-pause-012456.17212b972b04c498,type:*core.Event (25-Oct-2022 01:28:08.869) (total time: 2461ms):
	Trace[1016833730]: ---"TransformToStorage finished" err:<nil> 2387ms (01:28:11.257)
	Trace[1016833730]: [2.4610587s] [2.4610587s] END
	I1025 01:28:11.331448       1 trace.go:205] Trace[358324787]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:265bf050-ac24-4f6b-8faf-ea62e11d7f79,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (25-Oct-2022 01:28:08.868) (total time: 2462ms):
	Trace[358324787]: ---"Write to database call finished" len:415,err:<nil> 2462ms (01:28:11.331)
	Trace[358324787]: [2.4624534s] [2.4624534s] END
	I1025 01:28:11.338577       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 01:28:16.991022       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1025 01:28:17.128985       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 01:28:17.323954       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 01:28:17.355237       1 controller.go:616] quota admission added evaluator for: endpoints
	I1025 01:28:22.469823       1 trace.go:205] Trace[684228461]: "GuaranteedUpdate etcd3" audit-id:e0d3ad70-6384-4d9f-aae5-4f1e660f6c5a,key:/events/kube-system/kube-scheduler-pause-012456.17212b977baa85e0,type:*core.Event (25-Oct-2022 01:28:21.933) (total time: 535ms):
	Trace[684228461]: ---"initial value restored" 532ms (01:28:22.465)
	Trace[684228461]: [535.9001ms] [535.9001ms] END
	I1025 01:28:22.470335       1 trace.go:205] Trace[192671513]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-scheduler-pause-012456.17212b977baa85e0,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:e0d3ad70-6384-4d9f-aae5-4f1e660f6c5a,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (25-Oct-2022 01:28:21.933) (total time: 536ms):
	Trace[192671513]: ---"About to apply patch" 532ms (01:28:22.465)
	Trace[192671513]: [536.5979ms] [536.5979ms] END
	I1025 01:28:22.474441       1 trace.go:205] Trace[315186235]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-012456,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:bbd6c6d1-535d-4581-bb49-2854c3ce54a7,client:192.168.76.1,accept:application/json, */*,protocol:HTTP/2.0 (25-Oct-2022 01:28:21.926) (total time: 547ms):
	Trace[315186235]: ---"About to write a response" 547ms (01:28:22.473)
	Trace[315186235]: [547.7475ms] [547.7475ms] END
	
	* 
	* ==> kube-controller-manager [36238dbd7ae7] <==
	* I1025 01:27:26.967862       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1025 01:27:26.968101       1 event.go:294] "Event occurred" object="pause-012456" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-012456 event: Registered Node pause-012456 in Controller"
	I1025 01:27:26.968425       1 shared_informer.go:262] Caches are synced for HPA
	I1025 01:27:26.969777       1 shared_informer.go:262] Caches are synced for job
	I1025 01:27:26.969775       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I1025 01:27:26.970144       1 taint_manager.go:209] "Sending events to api server"
	I1025 01:27:26.975899       1 shared_informer.go:262] Caches are synced for persistent volume
	I1025 01:27:26.978414       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I1025 01:27:26.986851       1 shared_informer.go:262] Caches are synced for expand
	I1025 01:27:27.066267       1 shared_informer.go:262] Caches are synced for disruption
	I1025 01:27:27.067132       1 shared_informer.go:262] Caches are synced for attach detach
	I1025 01:27:27.070108       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 01:27:27.070307       1 shared_informer.go:262] Caches are synced for ephemeral
	I1025 01:27:27.070777       1 shared_informer.go:262] Caches are synced for PVC protection
	I1025 01:27:27.077437       1 shared_informer.go:262] Caches are synced for stateful set
	I1025 01:27:27.080327       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 01:27:27.465199       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 01:27:27.465232       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1025 01:27:27.479920       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 01:27:27.588008       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I1025 01:27:27.685157       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w6fq5"
	I1025 01:27:27.970000       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-9lpwx"
	I1025 01:27:28.006927       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-wfbsl"
	I1025 01:27:28.569616       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I1025 01:27:28.635726       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-9lpwx"
	
	* 
	* ==> kube-controller-manager [805d7017a1e7] <==
	* I1025 01:28:23.968053       1 shared_informer.go:262] Caches are synced for PVC protection
	I1025 01:28:23.968106       1 shared_informer.go:262] Caches are synced for service account
	I1025 01:28:23.968063       1 shared_informer.go:262] Caches are synced for node
	I1025 01:28:23.968243       1 range_allocator.go:166] Starting range CIDR allocator
	I1025 01:28:23.968285       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1025 01:28:23.968374       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1025 01:28:23.968100       1 shared_informer.go:262] Caches are synced for crt configmap
	I1025 01:28:23.969046       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1025 01:28:23.971012       1 shared_informer.go:262] Caches are synced for taint
	I1025 01:28:23.971168       1 shared_informer.go:262] Caches are synced for persistent volume
	I1025 01:28:23.971238       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I1025 01:28:23.971406       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I1025 01:28:23.971580       1 taint_manager.go:209] "Sending events to api server"
	W1025 01:28:23.971417       1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-012456. Assuming now as a timestamp.
	I1025 01:28:23.971731       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1025 01:28:23.971920       1 event.go:294] "Event occurred" object="pause-012456" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-012456 event: Registered Node pause-012456 in Controller"
	I1025 01:28:24.073082       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1025 01:28:24.075551       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I1025 01:28:24.079454       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1025 01:28:24.083806       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 01:28:24.095484       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 01:28:24.166774       1 shared_informer.go:262] Caches are synced for endpoint
	I1025 01:28:24.420269       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 01:28:24.420422       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1025 01:28:24.476766       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [03fb10b262ea] <==
	* E1025 01:27:47.306686       1 proxier.go:656] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I1025 01:27:47.370437       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1025 01:27:47.377724       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1025 01:27:47.381433       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1025 01:27:47.385735       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1025 01:27:47.390120       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E1025 01:27:47.394052       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-012456": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:48.576256       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-012456": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:50.857331       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-012456": dial tcp 192.168.76.2:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [162f68bfd3ce] <==
	* I1025 01:28:14.192428       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1025 01:28:14.196016       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1025 01:28:14.199437       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1025 01:28:14.266366       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1025 01:28:14.274994       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I1025 01:28:14.372349       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I1025 01:28:14.372522       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I1025 01:28:14.372785       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1025 01:28:14.574348       1 server_others.go:206] "Using iptables Proxier"
	I1025 01:28:14.575092       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1025 01:28:14.575651       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1025 01:28:14.575852       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1025 01:28:14.576306       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 01:28:14.576658       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 01:28:14.577551       1 server.go:661] "Version info" version="v1.25.3"
	I1025 01:28:14.577578       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 01:28:14.578833       1 config.go:444] "Starting node config controller"
	I1025 01:28:14.578849       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1025 01:28:14.579567       1 config.go:317] "Starting service config controller"
	I1025 01:28:14.579582       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1025 01:28:14.579620       1 config.go:226] "Starting endpoint slice config controller"
	I1025 01:28:14.579630       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1025 01:28:14.679598       1 shared_informer.go:262] Caches are synced for node config
	I1025 01:28:14.679794       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1025 01:28:14.680062       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [299c257d1eab] <==
	* W1025 01:27:51.848933       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:51.849066       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:51.871846       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:51.871907       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:51.916932       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:51.917038       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:52.001559       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:52.001754       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:52.353850       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:52.353984       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:52.501952       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:52.502056       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:52.547576       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:52.547737       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:52.554071       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:52.554238       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:52.928112       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:52.928247       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:52.960502       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:52.960638       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 01:27:54.257843       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1025 01:27:54.258607       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1025 01:27:54.258689       1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 01:27:54.259119       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1025 01:27:54.259131       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [96156d53ee28] <==
	* I1025 01:28:19.358101       1 serving.go:348] Generated self-signed cert in-memory
	I1025 01:28:19.930973       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1025 01:28:19.931129       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 01:28:21.177918       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 01:28:21.177918       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1025 01:28:21.178060       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1025 01:28:21.178202       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 01:28:21.178209       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 01:28:21.178234       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 01:28:21.178346       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 01:28:21.178367       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1025 01:28:21.278237       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I1025 01:28:21.278300       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 01:28:21.278484       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-10-25 01:26:19 UTC, end at Tue 2022-10-25 01:28:54 UTC. --
	Oct 25 01:27:59 pause-012456 kubelet[2199]: I1025 01:27:59.668334    2199 status_manager.go:667] "Failed to get status for pod" podUID=172144c1-0526-4f7d-8f6f-e793d007d436 pod="kube-system/kube-proxy-w6fq5" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w6fq5\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Oct 25 01:28:01 pause-012456 kubelet[2199]: I1025 01:28:01.204122    2199 scope.go:115] "RemoveContainer" containerID="03fb10b262ea6b4bd4a8c414fb69058dc7b75184d1e9b6c14baed645c863524c"
	Oct 25 01:28:01 pause-012456 kubelet[2199]: E1025 01:28:01.204734    2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-proxy pod=kube-proxy-w6fq5_kube-system(172144c1-0526-4f7d-8f6f-e793d007d436)\"" pod="kube-system/kube-proxy-w6fq5" podUID=172144c1-0526-4f7d-8f6f-e793d007d436
	Oct 25 01:28:01 pause-012456 kubelet[2199]: I1025 01:28:01.296320    2199 scope.go:115] "RemoveContainer" containerID="299c257d1eab91c1f40c77668db273653ba158f253ca2706bdf91ca73140d2dd"
	Oct 25 01:28:01 pause-012456 kubelet[2199]: E1025 01:28:01.297047    2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-pause-012456_kube-system(0cefbf30c3d96d31f12e31badaea1ba3)\"" pod="kube-system/kube-scheduler-pause-012456" podUID=0cefbf30c3d96d31f12e31badaea1ba3
	Oct 25 01:28:02 pause-012456 kubelet[2199]: I1025 01:28:02.371218    2199 scope.go:115] "RemoveContainer" containerID="03fb10b262ea6b4bd4a8c414fb69058dc7b75184d1e9b6c14baed645c863524c"
	Oct 25 01:28:02 pause-012456 kubelet[2199]: E1025 01:28:02.372433    2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-proxy pod=kube-proxy-w6fq5_kube-system(172144c1-0526-4f7d-8f6f-e793d007d436)\"" pod="kube-system/kube-proxy-w6fq5" podUID=172144c1-0526-4f7d-8f6f-e793d007d436
	Oct 25 01:28:02 pause-012456 kubelet[2199]: I1025 01:28:02.372437    2199 scope.go:115] "RemoveContainer" containerID="299c257d1eab91c1f40c77668db273653ba158f253ca2706bdf91ca73140d2dd"
	Oct 25 01:28:02 pause-012456 kubelet[2199]: E1025 01:28:02.373020    2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-pause-012456_kube-system(0cefbf30c3d96d31f12e31badaea1ba3)\"" pod="kube-system/kube-scheduler-pause-012456" podUID=0cefbf30c3d96d31f12e31badaea1ba3
	Oct 25 01:28:07 pause-012456 kubelet[2199]: E1025 01:28:07.768222    2199 reflector.go:140] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Oct 25 01:28:07 pause-012456 kubelet[2199]: E1025 01:28:07.770833    2199 reflector.go:140] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Oct 25 01:28:07 pause-012456 kubelet[2199]: E1025 01:28:07.770894    2199 reflector.go:140] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Oct 25 01:28:13 pause-012456 kubelet[2199]: I1025 01:28:13.189525    2199 scope.go:115] "RemoveContainer" containerID="03fb10b262ea6b4bd4a8c414fb69058dc7b75184d1e9b6c14baed645c863524c"
	Oct 25 01:28:17 pause-012456 kubelet[2199]: I1025 01:28:17.187945    2199 scope.go:115] "RemoveContainer" containerID="299c257d1eab91c1f40c77668db273653ba158f253ca2706bdf91ca73140d2dd"
	Oct 25 01:28:21 pause-012456 kubelet[2199]: I1025 01:28:21.166696    2199 request.go:682] Waited for 1.4251596s due to client-side throttling, not priority and fairness, request: PATCH:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-proxy-w6fq5.17212b9a0237f8ac
	Oct 25 01:28:21 pause-012456 kubelet[2199]: I1025 01:28:21.369846    2199 topology_manager.go:205] "Topology Admit Handler"
	Oct 25 01:28:21 pause-012456 kubelet[2199]: E1025 01:28:21.373477    2199 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="9a7bc2c8-b2ee-4089-9d34-a5fdf7b07e9d" containerName="coredns"
	Oct 25 01:28:21 pause-012456 kubelet[2199]: I1025 01:28:21.373850    2199 memory_manager.go:345] "RemoveStaleState removing state" podUID="9a7bc2c8-b2ee-4089-9d34-a5fdf7b07e9d" containerName="coredns"
	Oct 25 01:28:21 pause-012456 kubelet[2199]: I1025 01:28:21.473071    2199 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6de82917-024c-4c3a-a639-c4d922fafb55-tmp\") pod \"storage-provisioner\" (UID: \"6de82917-024c-4c3a-a639-c4d922fafb55\") " pod="kube-system/storage-provisioner"
	Oct 25 01:28:21 pause-012456 kubelet[2199]: I1025 01:28:21.473515    2199 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xg22\" (UniqueName: \"kubernetes.io/projected/6de82917-024c-4c3a-a639-c4d922fafb55-kube-api-access-7xg22\") pod \"storage-provisioner\" (UID: \"6de82917-024c-4c3a-a639-c4d922fafb55\") " pod="kube-system/storage-provisioner"
	Oct 25 01:28:23 pause-012456 kubelet[2199]: I1025 01:28:23.567195    2199 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e61a15ac0a82bb5ae70c351a3a40ad6577012b9f29aa18f7153ca875c976e001"
	Oct 25 01:28:33 pause-012456 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Oct 25 01:28:33 pause-012456 kubelet[2199]: I1025 01:28:33.288000    2199 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 25 01:28:33 pause-012456 systemd[1]: kubelet.service: Succeeded.
	Oct 25 01:28:33 pause-012456 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [c6f5fed8d5a9] <==
	* I1025 01:28:24.789580       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 01:28:24.821298       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 01:28:24.821466       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 01:28:24.874379       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 01:28:24.875026       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-012456_973ae756-136d-4b17-9d9a-e819a5044960!
	I1025 01:28:24.874724       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3f8e7c80-e378-4f67-8f08-76b8231e717b", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-012456_973ae756-136d-4b17-9d9a-e819a5044960 became leader
	I1025 01:28:24.976259       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-012456_973ae756-136d-4b17-9d9a-e819a5044960!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 01:28:53.330854    4228 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-012456 -n pause-012456
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-012456 -n pause-012456: exit status 2 (1.6333228s)

-- stdout --
	Paused

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-012456" apiserver is not running, skipping kubectl commands (state="Paused")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-012456
helpers_test.go:235: (dbg) docker inspect pause-012456:

-- stdout --
	[
	    {
	        "Id": "20521f22f32c2730e7a4d10b52805adba1224c6db842b442148c41141f6c10d3",
	        "Created": "2022-10-25T01:26:16.2008334Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 163799,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-10-25T01:26:18.3793296Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bee7563418bf494c9ba81d904a81ea2c80a1e144325734b9d4b288db23240ab5",
	        "ResolvConfPath": "/var/lib/docker/containers/20521f22f32c2730e7a4d10b52805adba1224c6db842b442148c41141f6c10d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/20521f22f32c2730e7a4d10b52805adba1224c6db842b442148c41141f6c10d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/20521f22f32c2730e7a4d10b52805adba1224c6db842b442148c41141f6c10d3/hosts",
	        "LogPath": "/var/lib/docker/containers/20521f22f32c2730e7a4d10b52805adba1224c6db842b442148c41141f6c10d3/20521f22f32c2730e7a4d10b52805adba1224c6db842b442148c41141f6c10d3-json.log",
	        "Name": "/pause-012456",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-012456:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-012456",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3aaedc988fe43d0d9b780713b2dd8d700a4e40d1e53cea399b8a69fd928ffb48-init/diff:/var/lib/docker/overlay2/1d72d69c076943d6cd413bc50b6a474779145c6396136b4aef1829c16f4a6d69/diff:/var/lib/docker/overlay2/2712457ef6b3ec08714d64e5261a9b327c3f8db2156d7a1b493340af804c46f1/diff:/var/lib/docker/overlay2/956ad2e584ed04429b79ab0ee4bdc8977af3fcfbab3cc0ed570922cc07ffd0a6/diff:/var/lib/docker/overlay2/c4f80c5076f71429b4266dc613d1850e7295faded99f05e04fcb13d2cb4d3157/diff:/var/lib/docker/overlay2/18b12a09b44604345877d4490348801b993263f747090a3a48eac835ac323d86/diff:/var/lib/docker/overlay2/6ce1e052ac8d5221cb1978a93a4c4d18c74da80e998b6e54246cdc95997a769f/diff:/var/lib/docker/overlay2/9e6e7c177b550c9c4fc4af8222ccc9bfe5b01fa177f08388c541fde750e4df80/diff:/var/lib/docker/overlay2/c56ad1fbd8fd09ba635cb91b82c303fab8be925f82edac48c47ed2b99f054b36/diff:/var/lib/docker/overlay2/b4a229acad56b83bd9d04813f3f4cf0c8c562169b12ef1e88243f4588d0b28f9/diff:/var/lib/docker/overlay2/56f30b
af9b74a7e6afda16e0f90a1863a3db06b5fec5cf06828152edc0faa420/diff:/var/lib/docker/overlay2/4275e6a6be34231198b756601a3b51a1d8446e8830b1c4037b20370047b88b9e/diff:/var/lib/docker/overlay2/0a9f47913b546daa2d558a978beaaa9e1e7e73a568fa1ee9d198e1e2154d3f75/diff:/var/lib/docker/overlay2/f1895cfb690eaa9bf966dd3f040878344a80c0dc3606dd2d5e67d9495cfa3ff8/diff:/var/lib/docker/overlay2/84335bbaf957cb1942f1d774b817e78297dbe5ffeb7e2e406e7492cf5a720c7e/diff:/var/lib/docker/overlay2/d9a26e65c06347ae6f8f306617639febfee5427dffa6d33a6acb3abfc22092fb/diff:/var/lib/docker/overlay2/a6893072e83e913a455da1f55020a69e4cd75c9ca7b9893e47d184eaf0da806d/diff:/var/lib/docker/overlay2/2d4c8dbcc1a6e63159280d831a4e448df4587dae065b53837a0e735e579361c4/diff:/var/lib/docker/overlay2/6fd2d854ad2aede74411487bcfe2f1fa3c4e1bbfad739455a690a5801c7c9d18/diff:/var/lib/docker/overlay2/d8435d49436e1e6d94054688732a28cdf047031ca600d938ab879a3f72791749/diff:/var/lib/docker/overlay2/618bd9835cc6596945db86c2cd23a6ea6c60992ff42cb8ba7a13f96776d79bb3/diff:/var/lib/d
ocker/overlay2/8e9af4c331a1374dad5f203889fa4953cd3111c705011d2f885ce8a3a04daf2c/diff:/var/lib/docker/overlay2/b8b4d702f888aa572be928e4e449cfaed5da2a045d94f145c0d48b2f838a2dc5/diff:/var/lib/docker/overlay2/6b708706c388c674df30fea4b16deb3b96447089d2a1cd5341ef199bd5dc3c4e/diff:/var/lib/docker/overlay2/f3bab3644fefb2215fd7b4b857958be30f575fd080ec37030b8b970e46155cdc/diff:/var/lib/docker/overlay2/809d38d9cc75c39f4eab1c2c64257e010b66f6dd17717a251371701f51b07237/diff:/var/lib/docker/overlay2/b2fc12e35954dea9baf6e418bbc1b629a71863e855e4373e8d665590cd7cbc54/diff:/var/lib/docker/overlay2/34dcaea23605015741cd4c620ce445c935ca6a08892a5aa15165a8422bb013c0/diff:/var/lib/docker/overlay2/4c362976bdb9f18c68d5c294dc08d7939899992ed5f8bb13ab34f58ec03fcdd6/diff:/var/lib/docker/overlay2/316879c125d7c6ab5ddb970715d730f6a9ea41f2b58da1ac9379b1d528a25970/diff:/var/lib/docker/overlay2/241a6ea1a0e862f8ac9d51e14f03999907acd9030349143120fad52b3c1c2b97/diff:/var/lib/docker/overlay2/c64f861002875793ea9a7d58a0e0b96ad95c3c7fb2874b758d4fb1bc26c
34587/diff:/var/lib/docker/overlay2/9b91106560e299e000b1229f3c2774c8ff0b881dbb4a27b80b89d0287f2f581d/diff:/var/lib/docker/overlay2/48a0a6d3a2a4100e68d167121a7df5a2244821b71406e29d5cc8220307ed9847/diff:/var/lib/docker/overlay2/1f280e54c1637034501f87fed8ca123799984880082b190271d5fa183974cb70/diff:/var/lib/docker/overlay2/8b8d91bd6daf07b06612bec716b08ed3d8032a4caa291548eead78a2b2c7e037/diff:/var/lib/docker/overlay2/b3ab8284e9708da3d4a94f3bd549609f23fcc286b4c1522cdb244344a4957bba/diff:/var/lib/docker/overlay2/7cc92644ec11a70cec25faf398c533eaa555c3a0ab3e783bf6f0cb342f18de20/diff:/var/lib/docker/overlay2/7f44e48c3f9293e16b6fedacc411012e83674000293a110908fcbe7b8aa0f56c/diff:/var/lib/docker/overlay2/7ded7fd7dc10119d3c74efa565ab8580571328086d82d5e795e7adcd3276e653/diff:/var/lib/docker/overlay2/b4654f15c85f235a8a9d5b03067d9aacd8d02569b48170551e8cc1fb340698ad/diff:/var/lib/docker/overlay2/901a06d4c922f4dcb994eec1c950879f560844312e104093523c1f1637594c70/diff:/var/lib/docker/overlay2/0fdbbeb11fdbed96bd80868c62d4c13bf887e7
83043225667d2bde711d03b757/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3aaedc988fe43d0d9b780713b2dd8d700a4e40d1e53cea399b8a69fd928ffb48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3aaedc988fe43d0d9b780713b2dd8d700a4e40d1e53cea399b8a69fd928ffb48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3aaedc988fe43d0d9b780713b2dd8d700a4e40d1e53cea399b8a69fd928ffb48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-012456",
	                "Source": "/var/lib/docker/volumes/pause-012456/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-012456",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-012456",
	                "name.minikube.sigs.k8s.io": "pause-012456",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a6582eedaff24e13771a62e8953d6e7a2f955a07f013fe19da233f0adca1261",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64560"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64561"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64563"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64564"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6a6582eedaff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-012456": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "20521f22f32c",
	                        "pause-012456"
	                    ],
	                    "NetworkID": "215bbf25ac33d2c24e30f8c0b7898eb5d9b9ddd1cf9424c60f9de63e2a4ebba4",
	                    "EndpointID": "880803c180d62462f225f0096de73b6edf45b16261ead8e066449708b0829800",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-012456 -n pause-012456
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-012456 -n pause-012456: exit status 2 (1.5300851s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-012456 logs -n 25
E1025 01:29:11.590061    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-012456 logs -n 25: (13.5955412s)
helpers_test.go:252: TestPause/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p multinode-010431            | multinode-010431            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:16 GMT | 25 Oct 22 01:17 GMT |
	| start   | -p test-preload-011708         | test-preload-011708         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:17 GMT | 25 Oct 22 01:19 GMT |
	|         | --memory=2200                  |                             |                   |         |                     |                     |
	|         | --alsologtostderr              |                             |                   |         |                     |                     |
	|         | --wait=true --preload=false    |                             |                   |         |                     |                     |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                             |                   |         |                     |                     |
	| ssh     | -p test-preload-011708         | test-preload-011708         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:19 GMT | 25 Oct 22 01:19 GMT |
	|         | -- docker pull                 |                             |                   |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox    |                             |                   |         |                     |                     |
	| start   | -p test-preload-011708         | test-preload-011708         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:19 GMT | 25 Oct 22 01:21 GMT |
	|         | --memory=2200                  |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                             |                   |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |                   |         |                     |                     |
	|         | --kubernetes-version=v1.24.6   |                             |                   |         |                     |                     |
	| ssh     | -p test-preload-011708 --      | test-preload-011708         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:21 GMT | 25 Oct 22 01:21 GMT |
	|         | docker images                  |                             |                   |         |                     |                     |
	| delete  | -p test-preload-011708         | test-preload-011708         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:21 GMT | 25 Oct 22 01:21 GMT |
	| start   | -p scheduled-stop-012128       | scheduled-stop-012128       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:21 GMT | 25 Oct 22 01:22 GMT |
	|         | --memory=2048 --driver=docker  |                             |                   |         |                     |                     |
	| stop    | -p scheduled-stop-012128       | scheduled-stop-012128       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:22 GMT | 25 Oct 22 01:22 GMT |
	|         | --schedule 5m                  |                             |                   |         |                     |                     |
	| ssh     | -p scheduled-stop-012128       | scheduled-stop-012128       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:22 GMT | 25 Oct 22 01:22 GMT |
	|         | -- sudo systemctl show         |                             |                   |         |                     |                     |
	|         | minikube-scheduled-stop        |                             |                   |         |                     |                     |
	|         | --no-page                      |                             |                   |         |                     |                     |
	| stop    | -p scheduled-stop-012128       | scheduled-stop-012128       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:22 GMT | 25 Oct 22 01:22 GMT |
	|         | --schedule 5s                  |                             |                   |         |                     |                     |
	| delete  | -p scheduled-stop-012128       | scheduled-stop-012128       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:23 GMT | 25 Oct 22 01:24 GMT |
	| start   | -p insufficient-storage-012403 | insufficient-storage-012403 | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:24 GMT |                     |
	|         | --memory=2048 --output=json    |                             |                   |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |                   |         |                     |                     |
	| delete  | -p insufficient-storage-012403 | insufficient-storage-012403 | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:24 GMT | 25 Oct 22 01:24 GMT |
	| start   | -p pause-012456 --memory=2048  | pause-012456                | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:24 GMT | 25 Oct 22 01:27 GMT |
	|         | --install-addons=false         |                             |                   |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |                   |         |                     |                     |
	| start   | -p offline-docker-012456       | offline-docker-012456       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:24 GMT | 25 Oct 22 01:27 GMT |
	|         | --alsologtostderr -v=1         |                             |                   |         |                     |                     |
	|         | --memory=2048 --wait=true      |                             |                   |         |                     |                     |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	| start   | -p NoKubernetes-012456         | NoKubernetes-012456         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:24 GMT |                     |
	|         | --no-kubernetes                |                             |                   |         |                     |                     |
	|         | --kubernetes-version=1.20      |                             |                   |         |                     |                     |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	| start   | -p NoKubernetes-012456         | NoKubernetes-012456         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:24 GMT | 25 Oct 22 01:28 GMT |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	| start   | -p pause-012456                | pause-012456                | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:27 GMT | 25 Oct 22 01:28 GMT |
	|         | --alsologtostderr -v=1         |                             |                   |         |                     |                     |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	| delete  | -p offline-docker-012456       | offline-docker-012456       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:27 GMT | 25 Oct 22 01:28 GMT |
	| start   | -p force-systemd-flag-012812   | force-systemd-flag-012812   | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:28 GMT |                     |
	|         | --memory=2048 --force-systemd  |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                             |                   |         |                     |                     |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	| start   | -p stopped-upgrade-012456      | stopped-upgrade-012456      | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:28 GMT |                     |
	|         | --memory=2200                  |                             |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                             |                   |         |                     |                     |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	| start   | -p NoKubernetes-012456         | NoKubernetes-012456         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:28 GMT | 25 Oct 22 01:28 GMT |
	|         | --no-kubernetes                |                             |                   |         |                     |                     |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	| pause   | -p pause-012456                | pause-012456                | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:28 GMT |                     |
	|         | --alsologtostderr -v=5         |                             |                   |         |                     |                     |
	| delete  | -p NoKubernetes-012456         | NoKubernetes-012456         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:28 GMT | 25 Oct 22 01:28 GMT |
	| start   | -p NoKubernetes-012456         | NoKubernetes-012456         | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:28 GMT |                     |
	|         | --no-kubernetes                |                             |                   |         |                     |                     |
	|         | --driver=docker                |                             |                   |         |                     |                     |
	|---------|--------------------------------|-----------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/25 01:28:51
	Running on machine: minikube8
	Binary: Built with gc go1.19.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 01:28:51.846013    3628 out.go:296] Setting OutFile to fd 1608 ...
	I1025 01:28:51.913426    3628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:28:51.913426    3628 out.go:309] Setting ErrFile to fd 1716...
	I1025 01:28:51.913426    3628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:28:51.936431    3628 out.go:303] Setting JSON to false
	I1025 01:28:51.947436    3628 start.go:116] hostinfo: {"hostname":"minikube8","uptime":10976,"bootTime":1666650355,"procs":160,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W1025 01:28:51.947573    3628 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 01:28:51.957568    3628 out.go:177] * [NoKubernetes-012456] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1025 01:28:51.961578    3628 notify.go:220] Checking for updates...
	I1025 01:28:51.963954    3628 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 01:28:51.967083    3628 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I1025 01:28:51.972312    3628 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 01:28:51.975761    3628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 01:28:47.825501   10888 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\force-systemd-flag-012812\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1025 01:28:47.832486   10888 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\force-systemd-flag-012812\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 01:28:48.205477   10888 cli_runner.go:164] Run: docker container inspect force-systemd-flag-012812 --format={{.State.Status}}
	I1025 01:28:48.467643   10888 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 01:28:48.467643   10888 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-012812 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 01:28:48.811913   10888 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\force-systemd-flag-012812\id_rsa...
	I1025 01:28:49.368927   10888 cli_runner.go:164] Run: docker container inspect force-systemd-flag-012812 --format={{.State.Status}}
	I1025 01:28:49.589604   10888 machine.go:88] provisioning docker machine ...
	I1025 01:28:49.589604   10888 ubuntu.go:169] provisioning hostname "force-systemd-flag-012812"
	I1025 01:28:49.600612   10888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-012812
	I1025 01:28:49.829414   10888 main.go:134] libmachine: Using SSH client type: native
	I1025 01:28:49.829414   10888 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 64852 <nil> <nil>}
	I1025 01:28:49.829414   10888 main.go:134] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-012812 && echo "force-systemd-flag-012812" | sudo tee /etc/hostname
	I1025 01:28:50.054791   10888 main.go:134] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-012812
	
	I1025 01:28:50.064839   10888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-012812
	I1025 01:28:50.312688   10888 main.go:134] libmachine: Using SSH client type: native
	I1025 01:28:50.313692   10888 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 64852 <nil> <nil>}
	I1025 01:28:50.313692   10888 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-012812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-012812/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-012812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 01:28:50.539614   10888 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1025 01:28:50.539670   10888 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I1025 01:28:50.539726   10888 ubuntu.go:177] setting up certificates
	I1025 01:28:50.539783   10888 provision.go:83] configureAuth start
	I1025 01:28:50.549727   10888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-012812
	I1025 01:28:50.774861   10888 provision.go:138] copyHostCerts
	I1025 01:28:50.774861   10888 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem
	I1025 01:28:50.774861   10888 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I1025 01:28:50.774861   10888 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I1025 01:28:50.774861   10888 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1675 bytes)
	I1025 01:28:50.775865   10888 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem
	I1025 01:28:50.775865   10888 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I1025 01:28:50.776881   10888 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I1025 01:28:50.776881   10888 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1025 01:28:50.777875   10888 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem
	I1025 01:28:50.777875   10888 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I1025 01:28:50.777875   10888 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I1025 01:28:50.777875   10888 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1025 01:28:50.778868   10888 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-flag-012812 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-flag-012812]
	I1025 01:28:51.202233   10888 provision.go:172] copyRemoteCerts
	I1025 01:28:51.216882   10888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 01:28:51.224750   10888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-012812
	I1025 01:28:51.443662   10888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64852 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\force-systemd-flag-012812\id_rsa Username:docker}
	I1025 01:28:51.587962   10888 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1025 01:28:51.587962   10888 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 01:28:51.637967   10888 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1025 01:28:51.637967   10888 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 01:28:51.688601   10888 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1025 01:28:51.689290   10888 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 01:28:51.748430   10888 provision.go:86] duration metric: configureAuth took 1.2085675s
	I1025 01:28:51.748482   10888 ubuntu.go:193] setting minikube options for container-runtime
	I1025 01:28:51.749057   10888 config.go:180] Loaded profile config "force-systemd-flag-012812": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:28:51.760307   10888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-012812
	I1025 01:28:52.019982   10888 main.go:134] libmachine: Using SSH client type: native
	I1025 01:28:52.019982   10888 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 64852 <nil> <nil>}
	I1025 01:28:52.020979   10888 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 01:28:52.241153   10888 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 01:28:52.241153   10888 ubuntu.go:71] root file system type: overlay
	I1025 01:28:52.241153   10888 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 01:28:52.248140   10888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-012812
	I1025 01:28:52.503891   10888 main.go:134] libmachine: Using SSH client type: native
	I1025 01:28:52.503891   10888 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 64852 <nil> <nil>}
	I1025 01:28:52.503891   10888 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 01:28:51.979990    3628 config.go:180] Loaded profile config "force-systemd-flag-012812": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:28:51.980587    3628 config.go:180] Loaded profile config "pause-012456": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:28:51.980631    3628 config.go:180] Loaded profile config "stopped-upgrade-012456": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1025 01:28:51.980631    3628 start.go:1682] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1025 01:28:51.980631    3628 driver.go:362] Setting default libvirt URI to qemu:///system
	I1025 01:28:52.284361    3628 docker.go:137] docker version: linux-20.10.17
	I1025 01:28:52.295973    3628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:28:52.905316    3628 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:87 OomKillDisable:true NGoroutines:61 SystemTime:2022-10-25 01:28:52.5008203 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:28:52.908626    3628 out.go:177] * Using the docker driver based on user configuration
	I1025 01:28:52.911704    3628 start.go:282] selected driver: docker
	I1025 01:28:52.911704    3628 start.go:808] validating driver "docker" against <nil>
	I1025 01:28:52.911704    3628 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 01:28:52.935790    3628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:28:53.548609    3628 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:87 OomKillDisable:true NGoroutines:61 SystemTime:2022-10-25 01:28:53.1228267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:28:53.548989    3628 start.go:1682] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1025 01:28:53.549040    3628 start.go:1682] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1025 01:28:53.549107    3628 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 01:28:53.594583    3628 start_flags.go:384] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I1025 01:28:53.594583    3628 start_flags.go:867] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 01:28:53.597688    3628 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 01:28:53.599587    3628 cni.go:95] Creating CNI manager for ""
	I1025 01:28:53.599587    3628 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 01:28:53.599587    3628 start_flags.go:317] config:
	{Name:NoKubernetes-012456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-012456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 01:28:53.600625    3628 start.go:1682] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1025 01:28:53.603094    3628 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-012456
	I1025 01:28:53.609814    3628 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 01:28:53.613434    3628 out.go:177] * Pulling base image ...
	I1025 01:28:53.615882    3628 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
	I1025 01:28:53.615882    3628 image.go:82] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	W1025 01:28:53.661139    3628 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1025 01:28:53.661416    3628 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\NoKubernetes-012456\config.json ...
	I1025 01:28:53.661777    3628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\NoKubernetes-012456\config.json: {Name:mk8263afd1d5403b2344de8a88fd93f22600e7a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:28:53.876067    3628 image.go:86] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 01:28:53.876067    3628 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 01:28:53.876067    3628 cache.go:208] Successfully downloaded all kic artifacts
	I1025 01:28:53.876067    3628 start.go:364] acquiring machines lock for NoKubernetes-012456: {Name:mk4f1554d9d0f8abbe533287a8cd7b66b668d166 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 01:28:53.876067    3628 start.go:368] acquired machines lock for "NoKubernetes-012456" in 0s
	I1025 01:28:53.876067    3628 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-012456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-012456 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 01:28:53.876067    3628 start.go:125] createHost starting for "" (driver="docker")
	I1025 01:28:51.052523    2776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 01:28:51.529729    2776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 01:28:51.790538    2776 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 01:28:52.107711    2776 api_server.go:51] waiting for apiserver process to appear ...
	I1025 01:28:52.120935    2776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:28:52.700990    2776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:28:53.204352    2776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:28:53.707815    2776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:28:54.213367    2776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:28:54.706726    2776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:28:55.205066    2776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
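The repeated pgrep runs above (process 2776) are a poll loop: after the kubeadm kubelet-start, control-plane and etcd phases are kicked off, the flow waits for a kube-apiserver process to appear by re-running the same check roughly every half second. A minimal illustrative sketch of that wait (hypothetical helper, not minikube's ssh_runner):

// Illustrative sketch only: poll until the kube-apiserver process exists or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching kube-apiserver process exists
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for the kube-apiserver process to appear")
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}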
	I1025 01:28:53.884066    3628 out.go:204] * Creating docker container (CPUs=2, Memory=16300MB) ...
	I1025 01:28:53.885077    3628 start.go:159] libmachine.API.Create for "NoKubernetes-012456" (driver="docker")
	I1025 01:28:53.885077    3628 client.go:168] LocalClient.Create starting
	I1025 01:28:53.885077    3628 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I1025 01:28:53.885077    3628 main.go:134] libmachine: Decoding PEM data...
	I1025 01:28:53.885077    3628 main.go:134] libmachine: Parsing certificate...
	I1025 01:28:53.885077    3628 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I1025 01:28:53.885077    3628 main.go:134] libmachine: Decoding PEM data...
	I1025 01:28:53.885077    3628 main.go:134] libmachine: Parsing certificate...
	I1025 01:28:53.894070    3628 cli_runner.go:164] Run: docker network inspect NoKubernetes-012456 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 01:28:54.153180    3628 cli_runner.go:211] docker network inspect NoKubernetes-012456 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 01:28:54.161175    3628 network_create.go:272] running [docker network inspect NoKubernetes-012456] to gather additional debugging logs...
	I1025 01:28:54.161175    3628 cli_runner.go:164] Run: docker network inspect NoKubernetes-012456
	W1025 01:28:54.386998    3628 cli_runner.go:211] docker network inspect NoKubernetes-012456 returned with exit code 1
	I1025 01:28:54.387066    3628 network_create.go:275] error running [docker network inspect NoKubernetes-012456]: docker network inspect NoKubernetes-012456: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: NoKubernetes-012456
	I1025 01:28:54.387066    3628 network_create.go:277] output of [docker network inspect NoKubernetes-012456]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: NoKubernetes-012456
	
	** /stderr **
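The inspect failure above ("No such network", exit status 1) is how the flow detects that the NoKubernetes-012456 network does not exist yet before falling through to creation. A minimal illustrative existence check in the same spirit (not minikube's actual helper):

// Illustrative sketch only: report whether a docker network already exists by
// checking the exit status of "docker network inspect", as in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func networkExists(name string) bool {
	// inspect exits non-zero (printing "No such network") when the network is missing
	return exec.Command("docker", "network", "inspect", name).Run() == nil
}

func main() {
	fmt.Println(networkExists("NoKubernetes-012456"))
}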
	I1025 01:28:54.401304    3628 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 01:28:54.657437    3628 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000a5e290] misses:0}
	I1025 01:28:54.657437    3628 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:28:54.657437    3628 network_create.go:115] attempt to create docker network NoKubernetes-012456 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 01:28:54.665936    3628 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-012456 NoKubernetes-012456
	W1025 01:28:54.897820    3628 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-012456 NoKubernetes-012456 returned with exit code 1
	W1025 01:28:54.897820    3628 network_create.go:107] failed to create docker network NoKubernetes-012456 192.168.49.0/24, will retry: subnet is taken
	I1025 01:28:54.917813    3628 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e290] amended:false}} dirty:map[] misses:0}
	I1025 01:28:54.917813    3628 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:28:54.940234    3628 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e290] amended:true}} dirty:map[192.168.49.0:0xc000a5e290 192.168.58.0:0xc000a5e378] misses:0}
	I1025 01:28:54.940234    3628 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:28:54.940234    3628 network_create.go:115] attempt to create docker network NoKubernetes-012456 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 01:28:54.949513    3628 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-012456 NoKubernetes-012456
	W1025 01:28:55.146478    3628 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-012456 NoKubernetes-012456 returned with exit code 1
	W1025 01:28:55.146478    3628 network_create.go:107] failed to create docker network NoKubernetes-012456 192.168.58.0/24, will retry: subnet is taken
	I1025 01:28:55.172456    3628 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e290] amended:true}} dirty:map[192.168.49.0:0xc000a5e290 192.168.58.0:0xc000a5e378] misses:1}
	I1025 01:28:55.172456    3628 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:28:55.198939    3628 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e290] amended:true}} dirty:map[192.168.49.0:0xc000a5e290 192.168.58.0:0xc000a5e378 192.168.67.0:0xc000a5e470] misses:1}
	I1025 01:28:55.199001    3628 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:28:55.199001    3628 network_create.go:115] attempt to create docker network NoKubernetes-012456 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 01:28:55.207152    3628 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-012456 NoKubernetes-012456
	W1025 01:28:55.427547    3628 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-012456 NoKubernetes-012456 returned with exit code 1
	W1025 01:28:55.427547    3628 network_create.go:107] failed to create docker network NoKubernetes-012456 192.168.67.0/24, will retry: subnet is taken
	I1025 01:28:55.455275    3628 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e290] amended:true}} dirty:map[192.168.49.0:0xc000a5e290 192.168.58.0:0xc000a5e378 192.168.67.0:0xc000a5e470] misses:2}
	I1025 01:28:55.455334    3628 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:28:55.482617    3628 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e290] amended:true}} dirty:map[192.168.49.0:0xc000a5e290 192.168.58.0:0xc000a5e378 192.168.67.0:0xc000a5e470 192.168.76.0:0xc0006aa550] misses:2}
	I1025 01:28:55.482830    3628 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:28:55.482830    3628 network_create.go:115] attempt to create docker network NoKubernetes-012456 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 01:28:55.490856    3628 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-012456 NoKubernetes-012456
	W1025 01:28:55.693197    3628 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-012456 NoKubernetes-012456 returned with exit code 1
	W1025 01:28:55.693256    3628 network_create.go:107] failed to create docker network NoKubernetes-012456 192.168.76.0/24, will retry: subnet is taken
	I1025 01:28:55.717796    3628 network.go:286] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e290] amended:true}} dirty:map[192.168.49.0:0xc000a5e290 192.168.58.0:0xc000a5e378 192.168.67.0:0xc000a5e470 192.168.76.0:0xc0006aa550] misses:3}
	I1025 01:28:55.717796    3628 network.go:244] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:28:55.745453    3628 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a5e290] amended:true}} dirty:map[192.168.49.0:0xc000a5e290 192.168.58.0:0xc000a5e378 192.168.67.0:0xc000a5e470 192.168.76.0:0xc0006aa550 192.168.85.0:0xc00018f118] misses:3}
	I1025 01:28:55.745453    3628 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:28:55.745453    3628 network_create.go:115] attempt to create docker network NoKubernetes-012456 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 01:28:55.755644    3628 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-012456 NoKubernetes-012456
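The sequence above steps through candidate private /24 subnets (192.168.49.0, 58.0, 67.0, 76.0, then 85.0), reserving each and retrying docker network create whenever the previous one was already taken. An illustrative Go sketch of that retry pattern (not minikube's network_create.go; the step size and range are assumptions read off the log):

// Illustrative sketch only: walk candidate 192.168.x.0/24 subnets and retry
// "docker network create" until one is accepted; a non-zero exit here usually
// means the subnet is already in use.
package main

import (
	"fmt"
	"os/exec"
)

func createNetwork(name string) (string, error) {
	for third := 49; third <= 103; third += 9 { // 49, 58, 67, 76, 85, ... (assumed step)
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).Run()
		if err == nil {
			return subnet, nil
		}
	}
	return "", fmt.Errorf("no free subnet found for network %s", name)
}

func main() {
	fmt.Println(createNetwork("example-net"))
}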
	I1025 01:28:52.759848   10888 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 01:28:52.774058   10888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-012812
	I1025 01:28:52.988083   10888 main.go:134] libmachine: Using SSH client type: native
	I1025 01:28:52.988083   10888 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 64852 <nil> <nil>}
	I1025 01:28:52.988083   10888 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
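The command above only swaps in the regenerated docker.service and reloads/restarts the daemon when the new unit actually differs from the one already on disk (the diff short-circuits the rest). A rough Go equivalent of that compare-then-swap-and-restart pattern (illustrative only, run locally rather than over SSH; not minikube's code):

// Illustrative sketch only: replace a systemd unit and restart docker only when the contents changed.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func updateUnitIfChanged(current, candidate string) error {
	oldData, _ := os.ReadFile(current) // a missing current unit reads as empty
	newData, err := os.ReadFile(candidate)
	if err != nil {
		return err
	}
	if bytes.Equal(oldData, newData) {
		return nil // unchanged; skip the unnecessary daemon restart
	}
	if err := os.Rename(candidate, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(updateUnitIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"))
}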
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-10-25 01:26:19 UTC, end at Tue 2022-10-25 01:29:00 UTC. --
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.842030500Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.882860100Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.924414100Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.924555300Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.924577400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.924589000Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.924600500Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.924614400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Oct 25 01:27:53 pause-012456 dockerd[4030]: time="2022-10-25T01:27:53.925058600Z" level=info msg="Loading containers: start."
	Oct 25 01:27:54 pause-012456 dockerd[4030]: time="2022-10-25T01:27:54.309543200Z" level=info msg="ignoring event" container=499586c4bbe94db6c1f33d9ea0b88e0d0d5252e734eebb56d94069f135ef12bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:27:54 pause-012456 dockerd[4030]: time="2022-10-25T01:27:54.369135100Z" level=info msg="ignoring event" container=b237899b1595dc276110f650ba4ef3efef1f20a92a5f73f7c4fc1f7a10d3d4a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:27:54 pause-012456 dockerd[4030]: time="2022-10-25T01:27:54.467860100Z" level=info msg="ignoring event" container=299c257d1eab91c1f40c77668db273653ba158f253ca2706bdf91ca73140d2dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:27:54 pause-012456 dockerd[4030]: time="2022-10-25T01:27:54.472496500Z" level=info msg="ignoring event" container=03fb10b262ea6b4bd4a8c414fb69058dc7b75184d1e9b6c14baed645c863524c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:27:54 pause-012456 dockerd[4030]: time="2022-10-25T01:27:54.959504100Z" level=info msg="Removing stale sandbox 93fc0fd8d76e3015b44184307586fa7121224ad08e48fa644ceb6a45eff26ac2 (499586c4bbe94db6c1f33d9ea0b88e0d0d5252e734eebb56d94069f135ef12bb)"
	Oct 25 01:27:54 pause-012456 dockerd[4030]: time="2022-10-25T01:27:54.966748000Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint d17fe93ce9172c225e26943b192bcb01635182158229455d01b90dfceadcae2f bc7722bdfac74ec72924fbd2fa7f3d7d4191e2349866a5d3bfa4e48051d629d0], retrying...."
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.206549800Z" level=info msg="Removing stale sandbox 0d9b0d7864a536357671b6773c29b7925ba8a19d75a9af250b7d12fe2995750f (b237899b1595dc276110f650ba4ef3efef1f20a92a5f73f7c4fc1f7a10d3d4a2)"
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.216983300Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint d17fe93ce9172c225e26943b192bcb01635182158229455d01b90dfceadcae2f eb146ee1462a1bf76f97dfbcc5e359088f2d4cf68f71f7af7de5485f8ef3d1b6], retrying...."
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.317279400Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.490481800Z" level=info msg="Loading containers: done."
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.584395400Z" level=info msg="Docker daemon" commit=e42327a graphdriver(s)=overlay2 version=20.10.18
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.584548800Z" level=info msg="Daemon has completed initialization"
	Oct 25 01:27:55 pause-012456 systemd[1]: Started Docker Application Container Engine.
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.657182000Z" level=info msg="API listen on [::]:2376"
	Oct 25 01:27:55 pause-012456 dockerd[4030]: time="2022-10-25T01:27:55.663853400Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 25 01:28:34 pause-012456 dockerd[4030]: time="2022-10-25T01:28:34.013563600Z" level=error msg="Handler for POST /v1.41/containers/9a61857de96a/pause returned error: Cannot pause container 9a61857de96a3e8c49802ee9da9ed1c19d357f354f0e1efa685d91a22624558e: OCI runtime pause failed: unable to freeze: unknown"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	c6f5fed8d5a90       6e38f40d628db       37 seconds ago       Running             storage-provisioner       0                   e61a15ac0a82b
	96156d53ee285       6d23ec0e8b87e       43 seconds ago       Running             kube-scheduler            2                   b8028b4301dce
	162f68bfd3ce8       beaaf00edd38a       47 seconds ago       Running             kube-proxy                2                   0e35b01082015
	80a120881748e       5185b96f0becf       About a minute ago   Running             coredns                   1                   fd7c603c9b74f
	48c9607ccbd6d       0346dbd74bcb9       About a minute ago   Running             kube-apiserver            1                   1aadf09053d49
	805d7017a1e7b       6039992312758       About a minute ago   Running             kube-controller-manager   1                   839db953c0135
	9a61857de96a3       a8a176a5d5d69       About a minute ago   Running             etcd                      1                   21921b116293f
	299c257d1eab9       6d23ec0e8b87e       About a minute ago   Exited              kube-scheduler            1                   b237899b1595d
	03fb10b262ea6       beaaf00edd38a       About a minute ago   Exited              kube-proxy                1                   499586c4bbe94
	bf591621d1f2b       5185b96f0becf       About a minute ago   Exited              coredns                   0                   efad283a29afe
	36238dbd7ae7f       6039992312758       2 minutes ago        Exited              kube-controller-manager   0                   88cbc65be559d
	0874739bcbb5b       0346dbd74bcb9       2 minutes ago        Exited              kube-apiserver            0                   d102dbe30c94c
	7518a38e8cedc       a8a176a5d5d69       2 minutes ago        Exited              etcd                      0                   edbd72e6bf321
	
	* 
	* ==> coredns [80a120881748] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> coredns [bf591621d1f2] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Oct25 01:01] WSL2: Performing memory compaction.
	[Oct25 01:03] WSL2: Performing memory compaction.
	[Oct25 01:04] WSL2: Performing memory compaction.
	[Oct25 01:05] WSL2: Performing memory compaction.
	[Oct25 01:06] WSL2: Performing memory compaction.
	[Oct25 01:07] WSL2: Performing memory compaction.
	[Oct25 01:08] WSL2: Performing memory compaction.
	[Oct25 01:09] WSL2: Performing memory compaction.
	[Oct25 01:10] WSL2: Performing memory compaction.
	[Oct25 01:11] WSL2: Performing memory compaction.
	[Oct25 01:12] WSL2: Performing memory compaction.
	[Oct25 01:13] WSL2: Performing memory compaction.
	[Oct25 01:14] WSL2: Performing memory compaction.
	[Oct25 01:15] WSL2: Performing memory compaction.
	[Oct25 01:17] WSL2: Performing memory compaction.
	[Oct25 01:18] WSL2: Performing memory compaction.
	[Oct25 01:19] WSL2: Performing memory compaction.
	[Oct25 01:20] WSL2: Performing memory compaction.
	[Oct25 01:21] WSL2: Performing memory compaction.
	[Oct25 01:22] WSL2: Performing memory compaction.
	[Oct25 01:23] WSL2: Performing memory compaction.
	[Oct25 01:24] WSL2: Performing memory compaction.
	[Oct25 01:25] WSL2: Performing memory compaction.
	[Oct25 01:26] process 'docker/tmp/qemu-check146077527/check' started with executable stack
	[Oct25 01:28] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [7518a38e8ced] <==
	* {"level":"warn","ts":"2022-10-25T01:27:22.130Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"692.5441ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-012456\" ","response":"range_response_count:1 size:5172"}
	{"level":"info","ts":"2022-10-25T01:27:22.131Z","caller":"traceutil/trace.go:171","msg":"trace[343945099] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-012456; range_end:; response_count:1; response_revision:275; }","duration":"692.6892ms","start":"2022-10-25T01:27:21.438Z","end":"2022-10-25T01:27:22.131Z","steps":["trace[343945099] 'agreement among raft nodes before linearized reading'  (duration: 670.3274ms)","trace[343945099] 'range keys from in-memory index tree'  (duration: 22.1645ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:27:22.131Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:27:21.438Z","time spent":"692.8558ms","remote":"127.0.0.1:35546","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5195,"request content":"key:\"/registry/pods/kube-system/etcd-pause-012456\" "}
	{"level":"info","ts":"2022-10-25T01:27:28.279Z","caller":"traceutil/trace.go:171","msg":"trace[134672829] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"100.337ms","start":"2022-10-25T01:27:28.179Z","end":"2022-10-25T01:27:28.279Z","steps":["trace[134672829] 'process raft request'  (duration: 100.0326ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T01:27:28.280Z","caller":"traceutil/trace.go:171","msg":"trace[1278870774] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"101.5496ms","start":"2022-10-25T01:27:28.178Z","end":"2022-10-25T01:27:28.280Z","steps":["trace[1278870774] 'process raft request'  (duration: 88.1302ms)","trace[1278870774] 'compare'  (duration: 11.9333ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:27:28.479Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.9714ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2022-10-25T01:27:28.480Z","caller":"traceutil/trace.go:171","msg":"trace[291873918] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:350; }","duration":"102.3084ms","start":"2022-10-25T01:27:28.377Z","end":"2022-10-25T01:27:28.479Z","steps":["trace[291873918] 'agreement among raft nodes before linearized reading'  (duration: 87.5718ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:27:32.874Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"200.5964ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-w6fq5\" ","response":"range_response_count:1 size:4417"}
	{"level":"info","ts":"2022-10-25T01:27:32.874Z","caller":"traceutil/trace.go:171","msg":"trace[1876732971] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-w6fq5; range_end:; response_count:1; response_revision:374; }","duration":"200.7091ms","start":"2022-10-25T01:27:32.674Z","end":"2022-10-25T01:27:32.874Z","steps":["trace[1876732971] 'agreement among raft nodes before linearized reading'  (duration: 91.242ms)","trace[1876732971] 'range keys from in-memory index tree'  (duration: 109.314ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:27:32.875Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"109.4861ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638331946385336563 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:373 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-10-25T01:27:32.875Z","caller":"traceutil/trace.go:171","msg":"trace[1457078045] linearizableReadLoop","detail":"{readStateIndex:388; appliedIndex:386; }","duration":"110.0969ms","start":"2022-10-25T01:27:32.765Z","end":"2022-10-25T01:27:32.875Z","steps":["trace[1457078045] 'read index received'  (duration: 41.578ms)","trace[1457078045] 'applied index is now lower than readState.Index'  (duration: 68.515ms)"],"step_count":2}
	{"level":"info","ts":"2022-10-25T01:27:32.875Z","caller":"traceutil/trace.go:171","msg":"trace[245622856] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"110.3024ms","start":"2022-10-25T01:27:32.765Z","end":"2022-10-25T01:27:32.875Z","steps":["trace[245622856] 'process raft request'  (duration: 109.932ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T01:27:32.875Z","caller":"traceutil/trace.go:171","msg":"trace[612610884] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"192.047ms","start":"2022-10-25T01:27:32.683Z","end":"2022-10-25T01:27:32.875Z","steps":["trace[612610884] 'process raft request'  (duration: 81.6109ms)","trace[612610884] 'compare'  (duration: 109.1515ms)"],"step_count":2}
	{"level":"info","ts":"2022-10-25T01:27:32.876Z","caller":"traceutil/trace.go:171","msg":"trace[111683511] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"108.7167ms","start":"2022-10-25T01:27:32.768Z","end":"2022-10-25T01:27:32.876Z","steps":["trace[111683511] 'process raft request'  (duration: 107.139ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:27:32.877Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"194.3549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-012456\" ","response":"range_response_count:1 size:4548"}
	{"level":"info","ts":"2022-10-25T01:27:32.877Z","caller":"traceutil/trace.go:171","msg":"trace[99413065] range","detail":"{range_begin:/registry/minions/pause-012456; range_end:; response_count:1; response_revision:377; }","duration":"194.3984ms","start":"2022-10-25T01:27:32.682Z","end":"2022-10-25T01:27:32.877Z","steps":["trace[99413065] 'agreement among raft nodes before linearized reading'  (duration: 194.308ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T01:27:44.965Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-10-25T01:27:44.965Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"pause-012456","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/10/25 01:27:44 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2022/10/25 01:27:44 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/10/25 01:27:45 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-10-25T01:27:45.067Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-10-25T01:27:45.175Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-10-25T01:27:45.178Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-10-25T01:27:45.178Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"pause-012456","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [9a61857de96a] <==
	* {"level":"info","ts":"2022-10-25T01:28:11.323Z","caller":"traceutil/trace.go:171","msg":"trace[1596209342] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:64; response_revision:398; }","duration":"2.7312188s","start":"2022-10-25T01:28:08.592Z","end":"2022-10-25T01:28:11.323Z","steps":["trace[1596209342] 'agreement among raft nodes before linearized reading'  (duration: 2.7167943s)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:28:11.324Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:28:08.592Z","time spent":"2.7314208s","remote":"127.0.0.1:38156","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":64,"response size":57837,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" "}
	{"level":"warn","ts":"2022-10-25T01:28:11.325Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.7330405s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2022-10-25T01:28:11.325Z","caller":"traceutil/trace.go:171","msg":"trace[1748078964] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:398; }","duration":"2.7331006s","start":"2022-10-25T01:28:08.592Z","end":"2022-10-25T01:28:11.325Z","steps":["trace[1748078964] 'agreement among raft nodes before linearized reading'  (duration: 2.7169552s)","trace[1748078964] 'range keys from in-memory index tree'  (duration: 16.028ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:28:11.325Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:28:08.592Z","time spent":"2.733231s","remote":"127.0.0.1:38162","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":465,"request content":"key:\"/registry/priorityclasses/system-node-critical\" "}
	{"level":"warn","ts":"2022-10-25T01:28:17.302Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2022-10-25T01:28:17.302Z","caller":"traceutil/trace.go:171","msg":"trace[2132762986] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:452; }","duration":"113.0164ms","start":"2022-10-25T01:28:17.189Z","end":"2022-10-25T01:28:17.302Z","steps":["trace[2132762986] 'agreement among raft nodes before linearized reading'  (duration: 43.1061ms)","trace[2132762986] 'range keys from in-memory index tree'  (duration: 69.3481ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:28:21.298Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.9595ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-012456\" ","response":"range_response_count:1 size:4548"}
	{"level":"info","ts":"2022-10-25T01:28:21.300Z","caller":"traceutil/trace.go:171","msg":"trace[1884648026] range","detail":"{range_begin:/registry/minions/pause-012456; range_end:; response_count:1; response_revision:471; }","duration":"112.0911ms","start":"2022-10-25T01:28:21.187Z","end":"2022-10-25T01:28:21.300Z","steps":["trace[1884648026] 'agreement among raft nodes before linearized reading'  (duration: 78.7687ms)","trace[1884648026] 'range keys from in-memory index tree'  (duration: 32.1535ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:28:22.460Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"116.8682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-10-25T01:28:22.460Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"525.6969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/kube-scheduler-pause-012456.17212b977baa85e0\" ","response":"range_response_count:1 size:741"}
	{"level":"info","ts":"2022-10-25T01:28:22.460Z","caller":"traceutil/trace.go:171","msg":"trace[1167109480] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:485; }","duration":"117.0416ms","start":"2022-10-25T01:28:22.343Z","end":"2022-10-25T01:28:22.460Z","steps":["trace[1167109480] 'range keys from in-memory index tree'  (duration: 116.6822ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T01:28:22.460Z","caller":"traceutil/trace.go:171","msg":"trace[1903219565] range","detail":"{range_begin:/registry/events/kube-system/kube-scheduler-pause-012456.17212b977baa85e0; range_end:; response_count:1; response_revision:485; }","duration":"525.7639ms","start":"2022-10-25T01:28:21.934Z","end":"2022-10-25T01:28:22.460Z","steps":["trace[1903219565] 'range keys from in-memory index tree'  (duration: 525.5791ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:28:22.460Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:28:21.934Z","time spent":"525.8321ms","remote":"127.0.0.1:38080","response type":"/etcdserverpb.KV/Range","request count":0,"request size":75,"response count":1,"response size":764,"request content":"key:\"/registry/events/kube-system/kube-scheduler-pause-012456.17212b977baa85e0\" "}
	{"level":"warn","ts":"2022-10-25T01:28:22.460Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"532.6543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-012456\" ","response":"range_response_count:1 size:7337"}
	{"level":"info","ts":"2022-10-25T01:28:22.471Z","caller":"traceutil/trace.go:171","msg":"trace[1803246184] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-012456; range_end:; response_count:1; response_revision:485; }","duration":"544.171ms","start":"2022-10-25T01:28:21.927Z","end":"2022-10-25T01:28:22.471Z","steps":["trace[1803246184] 'range keys from in-memory index tree'  (duration: 532.4257ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:28:22.471Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:28:21.927Z","time spent":"544.3598ms","remote":"127.0.0.1:38104","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":7360,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-012456\" "}
	{"level":"warn","ts":"2022-10-25T01:28:33.470Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638331946401562000,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-10-25T01:28:34.476Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"909.8162ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638331946401562003 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.76.2\" mod_revision:489 > success:<request_put:<key:\"/registry/masterleases/192.168.76.2\" value_size:65 lease:6414959909546786193 >> failure:<request_range:<key:\"/registry/masterleases/192.168.76.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-10-25T01:28:34.477Z","caller":"traceutil/trace.go:171","msg":"trace[896272498] linearizableReadLoop","detail":"{readStateIndex:529; appliedIndex:528; }","duration":"1.5072272s","start":"2022-10-25T01:28:32.969Z","end":"2022-10-25T01:28:34.476Z","steps":["trace[896272498] 'read index received'  (duration: 596.7526ms)","trace[896272498] 'applied index is now lower than readState.Index'  (duration: 910.4695ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:28:34.477Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.5074838s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:604"}
	{"level":"info","ts":"2022-10-25T01:28:34.477Z","caller":"traceutil/trace.go:171","msg":"trace[691269448] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:504; }","duration":"1.5076488s","start":"2022-10-25T01:28:32.969Z","end":"2022-10-25T01:28:34.477Z","steps":["trace[691269448] 'agreement among raft nodes before linearized reading'  (duration: 1.5073894s)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:28:34.477Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:28:32.969Z","time spent":"1.5078266s","remote":"127.0.0.1:38100","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":627,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2022-10-25T01:28:34.477Z","caller":"traceutil/trace.go:171","msg":"trace[164310724] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"1.691858s","start":"2022-10-25T01:28:32.785Z","end":"2022-10-25T01:28:34.477Z","steps":["trace[164310724] 'process raft request'  (duration: 781.1647ms)","trace[164310724] 'compare'  (duration: 909.3344ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:28:34.477Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:28:32.785Z","time spent":"1.692345s","remote":"127.0.0.1:37956","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":116,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.76.2\" mod_revision:489 > success:<request_put:<key:\"/registry/masterleases/192.168.76.2\" value_size:65 lease:6414959909546786193 >> failure:<request_range:<key:\"/registry/masterleases/192.168.76.2\" > >"}
	
	* 
	* ==> kernel <==
	*  01:29:11 up  1:35,  0 users,  load average: 8.98, 6.16, 3.64
	Linux pause-012456 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [0874739bcbb5] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 01:27:53.456229       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 01:27:53.506482       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 01:27:53.510747       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [48c9607ccbd6] <==
	* Trace[693741938]: ---"Listing from storage done" 2737ms (01:28:11.328)
	Trace[693741938]: [2.7385609s] [2.7385609s] END
	I1025 01:28:11.330547       1 trace.go:205] Trace[2130098445]: "Get" url:/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical,user-agent:kube-apiserver/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:def4e624-af31-44e5-a096-9a16f41cdce6,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (25-Oct-2022 01:28:08.589) (total time: 2741ms):
	Trace[2130098445]: ---"About to write a response" 2740ms (01:28:11.330)
	Trace[2130098445]: [2.7410195s] [2.7410195s] END
	I1025 01:28:11.331165       1 trace.go:205] Trace[1016833730]: "Create etcd3" audit-id:265bf050-ac24-4f6b-8faf-ea62e11d7f79,key:/events/kube-system/kube-apiserver-pause-012456.17212b972b04c498,type:*core.Event (25-Oct-2022 01:28:08.869) (total time: 2461ms):
	Trace[1016833730]: ---"TransformToStorage finished" err:<nil> 2387ms (01:28:11.257)
	Trace[1016833730]: [2.4610587s] [2.4610587s] END
	I1025 01:28:11.331448       1 trace.go:205] Trace[358324787]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:265bf050-ac24-4f6b-8faf-ea62e11d7f79,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (25-Oct-2022 01:28:08.868) (total time: 2462ms):
	Trace[358324787]: ---"Write to database call finished" len:415,err:<nil> 2462ms (01:28:11.331)
	Trace[358324787]: [2.4624534s] [2.4624534s] END
	I1025 01:28:11.338577       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 01:28:16.991022       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1025 01:28:17.128985       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 01:28:17.323954       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 01:28:17.355237       1 controller.go:616] quota admission added evaluator for: endpoints
	I1025 01:28:22.469823       1 trace.go:205] Trace[684228461]: "GuaranteedUpdate etcd3" audit-id:e0d3ad70-6384-4d9f-aae5-4f1e660f6c5a,key:/events/kube-system/kube-scheduler-pause-012456.17212b977baa85e0,type:*core.Event (25-Oct-2022 01:28:21.933) (total time: 535ms):
	Trace[684228461]: ---"initial value restored" 532ms (01:28:22.465)
	Trace[684228461]: [535.9001ms] [535.9001ms] END
	I1025 01:28:22.470335       1 trace.go:205] Trace[192671513]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-scheduler-pause-012456.17212b977baa85e0,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:e0d3ad70-6384-4d9f-aae5-4f1e660f6c5a,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (25-Oct-2022 01:28:21.933) (total time: 536ms):
	Trace[192671513]: ---"About to apply patch" 532ms (01:28:22.465)
	Trace[192671513]: [536.5979ms] [536.5979ms] END
	I1025 01:28:22.474441       1 trace.go:205] Trace[315186235]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-012456,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:bbd6c6d1-535d-4581-bb49-2854c3ce54a7,client:192.168.76.1,accept:application/json, */*,protocol:HTTP/2.0 (25-Oct-2022 01:28:21.926) (total time: 547ms):
	Trace[315186235]: ---"About to write a response" 547ms (01:28:22.473)
	Trace[315186235]: [547.7475ms] [547.7475ms] END
	
	* 
	* ==> kube-controller-manager [36238dbd7ae7] <==
	* I1025 01:27:26.967862       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1025 01:27:26.968101       1 event.go:294] "Event occurred" object="pause-012456" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-012456 event: Registered Node pause-012456 in Controller"
	I1025 01:27:26.968425       1 shared_informer.go:262] Caches are synced for HPA
	I1025 01:27:26.969777       1 shared_informer.go:262] Caches are synced for job
	I1025 01:27:26.969775       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I1025 01:27:26.970144       1 taint_manager.go:209] "Sending events to api server"
	I1025 01:27:26.975899       1 shared_informer.go:262] Caches are synced for persistent volume
	I1025 01:27:26.978414       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I1025 01:27:26.986851       1 shared_informer.go:262] Caches are synced for expand
	I1025 01:27:27.066267       1 shared_informer.go:262] Caches are synced for disruption
	I1025 01:27:27.067132       1 shared_informer.go:262] Caches are synced for attach detach
	I1025 01:27:27.070108       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 01:27:27.070307       1 shared_informer.go:262] Caches are synced for ephemeral
	I1025 01:27:27.070777       1 shared_informer.go:262] Caches are synced for PVC protection
	I1025 01:27:27.077437       1 shared_informer.go:262] Caches are synced for stateful set
	I1025 01:27:27.080327       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 01:27:27.465199       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 01:27:27.465232       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1025 01:27:27.479920       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 01:27:27.588008       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I1025 01:27:27.685157       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w6fq5"
	I1025 01:27:27.970000       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-9lpwx"
	I1025 01:27:28.006927       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-wfbsl"
	I1025 01:27:28.569616       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I1025 01:27:28.635726       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-9lpwx"
	
	* 
	* ==> kube-controller-manager [805d7017a1e7] <==
	* I1025 01:28:23.968053       1 shared_informer.go:262] Caches are synced for PVC protection
	I1025 01:28:23.968106       1 shared_informer.go:262] Caches are synced for service account
	I1025 01:28:23.968063       1 shared_informer.go:262] Caches are synced for node
	I1025 01:28:23.968243       1 range_allocator.go:166] Starting range CIDR allocator
	I1025 01:28:23.968285       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1025 01:28:23.968374       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1025 01:28:23.968100       1 shared_informer.go:262] Caches are synced for crt configmap
	I1025 01:28:23.969046       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1025 01:28:23.971012       1 shared_informer.go:262] Caches are synced for taint
	I1025 01:28:23.971168       1 shared_informer.go:262] Caches are synced for persistent volume
	I1025 01:28:23.971238       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I1025 01:28:23.971406       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I1025 01:28:23.971580       1 taint_manager.go:209] "Sending events to api server"
	W1025 01:28:23.971417       1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-012456. Assuming now as a timestamp.
	I1025 01:28:23.971731       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1025 01:28:23.971920       1 event.go:294] "Event occurred" object="pause-012456" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-012456 event: Registered Node pause-012456 in Controller"
	I1025 01:28:24.073082       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1025 01:28:24.075551       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I1025 01:28:24.079454       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1025 01:28:24.083806       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 01:28:24.095484       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 01:28:24.166774       1 shared_informer.go:262] Caches are synced for endpoint
	I1025 01:28:24.420269       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 01:28:24.420422       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1025 01:28:24.476766       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [03fb10b262ea] <==
	* E1025 01:27:47.306686       1 proxier.go:656] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I1025 01:27:47.370437       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1025 01:27:47.377724       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1025 01:27:47.381433       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1025 01:27:47.385735       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1025 01:27:47.390120       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E1025 01:27:47.394052       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-012456": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:48.576256       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-012456": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:50.857331       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-012456": dial tcp 192.168.76.2:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [162f68bfd3ce] <==
	* I1025 01:28:14.192428       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1025 01:28:14.196016       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1025 01:28:14.199437       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1025 01:28:14.266366       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1025 01:28:14.274994       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I1025 01:28:14.372349       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I1025 01:28:14.372522       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I1025 01:28:14.372785       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1025 01:28:14.574348       1 server_others.go:206] "Using iptables Proxier"
	I1025 01:28:14.575092       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1025 01:28:14.575651       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1025 01:28:14.575852       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1025 01:28:14.576306       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 01:28:14.576658       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 01:28:14.577551       1 server.go:661] "Version info" version="v1.25.3"
	I1025 01:28:14.577578       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 01:28:14.578833       1 config.go:444] "Starting node config controller"
	I1025 01:28:14.578849       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1025 01:28:14.579567       1 config.go:317] "Starting service config controller"
	I1025 01:28:14.579582       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1025 01:28:14.579620       1 config.go:226] "Starting endpoint slice config controller"
	I1025 01:28:14.579630       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1025 01:28:14.679598       1 shared_informer.go:262] Caches are synced for node config
	I1025 01:28:14.679794       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1025 01:28:14.680062       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [299c257d1eab] <==
	* W1025 01:27:51.848933       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:51.849066       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:51.871846       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:51.871907       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:51.916932       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:51.917038       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:52.001559       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:52.001754       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:52.353850       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:52.353984       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:52.501952       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:52.502056       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:52.547576       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:52.547737       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:52.554071       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:52.554238       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:52.928112       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:52.928247       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W1025 01:27:52.960502       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E1025 01:27:52.960638       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 01:27:54.257843       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1025 01:27:54.258607       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1025 01:27:54.258689       1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 01:27:54.259119       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1025 01:27:54.259131       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [96156d53ee28] <==
	* I1025 01:28:19.358101       1 serving.go:348] Generated self-signed cert in-memory
	I1025 01:28:19.930973       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1025 01:28:19.931129       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 01:28:21.177918       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 01:28:21.177918       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1025 01:28:21.178060       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1025 01:28:21.178202       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 01:28:21.178209       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 01:28:21.178234       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 01:28:21.178346       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 01:28:21.178367       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1025 01:28:21.278237       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I1025 01:28:21.278300       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 01:28:21.278484       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-10-25 01:26:19 UTC, end at Tue 2022-10-25 01:29:11 UTC. --
	Oct 25 01:27:59 pause-012456 kubelet[2199]: I1025 01:27:59.668334    2199 status_manager.go:667] "Failed to get status for pod" podUID=172144c1-0526-4f7d-8f6f-e793d007d436 pod="kube-system/kube-proxy-w6fq5" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w6fq5\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Oct 25 01:28:01 pause-012456 kubelet[2199]: I1025 01:28:01.204122    2199 scope.go:115] "RemoveContainer" containerID="03fb10b262ea6b4bd4a8c414fb69058dc7b75184d1e9b6c14baed645c863524c"
	Oct 25 01:28:01 pause-012456 kubelet[2199]: E1025 01:28:01.204734    2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-proxy pod=kube-proxy-w6fq5_kube-system(172144c1-0526-4f7d-8f6f-e793d007d436)\"" pod="kube-system/kube-proxy-w6fq5" podUID=172144c1-0526-4f7d-8f6f-e793d007d436
	Oct 25 01:28:01 pause-012456 kubelet[2199]: I1025 01:28:01.296320    2199 scope.go:115] "RemoveContainer" containerID="299c257d1eab91c1f40c77668db273653ba158f253ca2706bdf91ca73140d2dd"
	Oct 25 01:28:01 pause-012456 kubelet[2199]: E1025 01:28:01.297047    2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-pause-012456_kube-system(0cefbf30c3d96d31f12e31badaea1ba3)\"" pod="kube-system/kube-scheduler-pause-012456" podUID=0cefbf30c3d96d31f12e31badaea1ba3
	Oct 25 01:28:02 pause-012456 kubelet[2199]: I1025 01:28:02.371218    2199 scope.go:115] "RemoveContainer" containerID="03fb10b262ea6b4bd4a8c414fb69058dc7b75184d1e9b6c14baed645c863524c"
	Oct 25 01:28:02 pause-012456 kubelet[2199]: E1025 01:28:02.372433    2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-proxy pod=kube-proxy-w6fq5_kube-system(172144c1-0526-4f7d-8f6f-e793d007d436)\"" pod="kube-system/kube-proxy-w6fq5" podUID=172144c1-0526-4f7d-8f6f-e793d007d436
	Oct 25 01:28:02 pause-012456 kubelet[2199]: I1025 01:28:02.372437    2199 scope.go:115] "RemoveContainer" containerID="299c257d1eab91c1f40c77668db273653ba158f253ca2706bdf91ca73140d2dd"
	Oct 25 01:28:02 pause-012456 kubelet[2199]: E1025 01:28:02.373020    2199 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-pause-012456_kube-system(0cefbf30c3d96d31f12e31badaea1ba3)\"" pod="kube-system/kube-scheduler-pause-012456" podUID=0cefbf30c3d96d31f12e31badaea1ba3
	Oct 25 01:28:07 pause-012456 kubelet[2199]: E1025 01:28:07.768222    2199 reflector.go:140] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Oct 25 01:28:07 pause-012456 kubelet[2199]: E1025 01:28:07.770833    2199 reflector.go:140] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Oct 25 01:28:07 pause-012456 kubelet[2199]: E1025 01:28:07.770894    2199 reflector.go:140] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Oct 25 01:28:13 pause-012456 kubelet[2199]: I1025 01:28:13.189525    2199 scope.go:115] "RemoveContainer" containerID="03fb10b262ea6b4bd4a8c414fb69058dc7b75184d1e9b6c14baed645c863524c"
	Oct 25 01:28:17 pause-012456 kubelet[2199]: I1025 01:28:17.187945    2199 scope.go:115] "RemoveContainer" containerID="299c257d1eab91c1f40c77668db273653ba158f253ca2706bdf91ca73140d2dd"
	Oct 25 01:28:21 pause-012456 kubelet[2199]: I1025 01:28:21.166696    2199 request.go:682] Waited for 1.4251596s due to client-side throttling, not priority and fairness, request: PATCH:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-proxy-w6fq5.17212b9a0237f8ac
	Oct 25 01:28:21 pause-012456 kubelet[2199]: I1025 01:28:21.369846    2199 topology_manager.go:205] "Topology Admit Handler"
	Oct 25 01:28:21 pause-012456 kubelet[2199]: E1025 01:28:21.373477    2199 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="9a7bc2c8-b2ee-4089-9d34-a5fdf7b07e9d" containerName="coredns"
	Oct 25 01:28:21 pause-012456 kubelet[2199]: I1025 01:28:21.373850    2199 memory_manager.go:345] "RemoveStaleState removing state" podUID="9a7bc2c8-b2ee-4089-9d34-a5fdf7b07e9d" containerName="coredns"
	Oct 25 01:28:21 pause-012456 kubelet[2199]: I1025 01:28:21.473071    2199 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6de82917-024c-4c3a-a639-c4d922fafb55-tmp\") pod \"storage-provisioner\" (UID: \"6de82917-024c-4c3a-a639-c4d922fafb55\") " pod="kube-system/storage-provisioner"
	Oct 25 01:28:21 pause-012456 kubelet[2199]: I1025 01:28:21.473515    2199 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xg22\" (UniqueName: \"kubernetes.io/projected/6de82917-024c-4c3a-a639-c4d922fafb55-kube-api-access-7xg22\") pod \"storage-provisioner\" (UID: \"6de82917-024c-4c3a-a639-c4d922fafb55\") " pod="kube-system/storage-provisioner"
	Oct 25 01:28:23 pause-012456 kubelet[2199]: I1025 01:28:23.567195    2199 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e61a15ac0a82bb5ae70c351a3a40ad6577012b9f29aa18f7153ca875c976e001"
	Oct 25 01:28:33 pause-012456 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Oct 25 01:28:33 pause-012456 kubelet[2199]: I1025 01:28:33.288000    2199 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 25 01:28:33 pause-012456 systemd[1]: kubelet.service: Succeeded.
	Oct 25 01:28:33 pause-012456 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [c6f5fed8d5a9] <==
	* I1025 01:28:24.789580       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 01:28:24.821298       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 01:28:24.821466       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 01:28:24.874379       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 01:28:24.875026       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-012456_973ae756-136d-4b17-9d9a-e819a5044960!
	I1025 01:28:24.874724       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3f8e7c80-e378-4f67-8f08-76b8231e717b", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-012456_973ae756-136d-4b17-9d9a-e819a5044960 became leader
	I1025 01:28:24.976259       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-012456_973ae756-136d-4b17-9d9a-e819a5044960!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 01:29:10.837650    9612 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-012456 -n pause-012456
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-012456 -n pause-012456: exit status 2 (1.6881306s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-012456" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestPause/serial/Pause (42.44s)
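The failing assertion above boils down to minikube's own pause/status commands. A minimal local reproduction sketch (assuming a Docker-driver profile named pause-012456, the name used by this run; the unpause step is shown only as an illustrative follow-up and is not part of the test):

	out/minikube-windows-amd64.exe pause -p pause-012456
	out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-012456 -n pause-012456   # prints "Paused"; status exits with code 2 while the control plane is paused, which the helper treats as "may be ok"
	out/minikube-windows-amd64.exe unpause -p pause-012456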

                                                
                                    
TestNetworkPlugins/group/cilium/Start (596.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-012958 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-012958 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 80 (9m56.2515029s)

                                                
                                                
-- stdout --
	* [cilium-012958] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14956
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node cilium-012958 in cluster cilium-012958
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.18 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Cilium (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 01:47:00.540722    4244 out.go:296] Setting OutFile to fd 1820 ...
	I1025 01:47:00.609555    4244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:47:00.609555    4244 out.go:309] Setting ErrFile to fd 1072...
	I1025 01:47:00.609555    4244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:47:00.629718    4244 out.go:303] Setting JSON to false
	I1025 01:47:00.632821    4244 start.go:116] hostinfo: {"hostname":"minikube8","uptime":12065,"bootTime":1666650355,"procs":161,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W1025 01:47:00.632821    4244 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 01:47:00.767443    4244 out.go:177] * [cilium-012958] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1025 01:47:00.919214    4244 notify.go:220] Checking for updates...
	I1025 01:47:01.008948    4244 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 01:47:01.218304    4244 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I1025 01:47:01.485182    4244 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 01:47:01.825763    4244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 01:47:01.929948    4244 config.go:180] Loaded profile config "auto-012955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:47:01.930505    4244 config.go:180] Loaded profile config "default-k8s-diff-port-013732": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:47:01.931016    4244 config.go:180] Loaded profile config "newest-cni-014519": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:47:01.931084    4244 driver.go:362] Setting default libvirt URI to qemu:///system
	I1025 01:47:02.246574    4244 docker.go:137] docker version: linux-20.10.17
	I1025 01:47:02.254371    4244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:47:03.502979    4244 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.2484401s)
	I1025 01:47:03.503135    4244 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:76 OomKillDisable:true NGoroutines:60 SystemTime:2022-10-25 01:47:02.4076369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:47:03.719779    4244 out.go:177] * Using the docker driver based on user configuration
	I1025 01:47:03.924515    4244 start.go:282] selected driver: docker
	I1025 01:47:03.924515    4244 start.go:808] validating driver "docker" against <nil>
	I1025 01:47:03.925057    4244 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 01:47:03.991110    4244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:47:04.774268    4244 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:76 OomKillDisable:true NGoroutines:60 SystemTime:2022-10-25 01:47:04.1497908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:47:04.775264    4244 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 01:47:04.778311    4244 start_flags.go:885] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 01:47:04.783293    4244 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 01:47:04.790280    4244 cni.go:95] Creating CNI manager for "cilium"
	I1025 01:47:04.790280    4244 start_flags.go:312] Found "Cilium" CNI - setting NetworkPlugin=cni
	I1025 01:47:04.790280    4244 start_flags.go:317] config:
	{Name:cilium-012958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-012958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 01:47:04.794272    4244 out.go:177] * Starting control plane node cilium-012958 in cluster cilium-012958
	I1025 01:47:04.798272    4244 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 01:47:04.802246    4244 out.go:177] * Pulling base image ...
	I1025 01:47:04.805246    4244 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 01:47:04.805246    4244 image.go:82] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 01:47:04.805246    4244 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 01:47:04.805246    4244 cache.go:57] Caching tarball of preloaded images
	I1025 01:47:04.806253    4244 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 01:47:04.806253    4244 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 01:47:04.806253    4244 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\config.json ...
	I1025 01:47:04.806253    4244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\config.json: {Name:mk756de699268a2ccdd1c33f9f75d955e93c4b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:47:05.043640    4244 image.go:86] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 01:47:05.043640    4244 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 01:47:05.043640    4244 cache.go:208] Successfully downloaded all kic artifacts
	I1025 01:47:05.043640    4244 start.go:364] acquiring machines lock for cilium-012958: {Name:mk1f4eeada389c8d6accc79d00abbffd2eec8f5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 01:47:05.043640    4244 start.go:368] acquired machines lock for "cilium-012958" in 0s
	I1025 01:47:05.043640    4244 start.go:93] Provisioning new machine with config: &{Name:cilium-012958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-012958 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 01:47:05.043640    4244 start.go:125] createHost starting for "" (driver="docker")
	I1025 01:47:05.049648    4244 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 01:47:05.049648    4244 start.go:159] libmachine.API.Create for "cilium-012958" (driver="docker")
	I1025 01:47:05.049648    4244 client.go:168] LocalClient.Create starting
	I1025 01:47:05.050653    4244 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I1025 01:47:05.050653    4244 main.go:134] libmachine: Decoding PEM data...
	I1025 01:47:05.050653    4244 main.go:134] libmachine: Parsing certificate...
	I1025 01:47:05.050653    4244 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I1025 01:47:05.050653    4244 main.go:134] libmachine: Decoding PEM data...
	I1025 01:47:05.050653    4244 main.go:134] libmachine: Parsing certificate...
	I1025 01:47:05.063752    4244 cli_runner.go:164] Run: docker network inspect cilium-012958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 01:47:05.307527    4244 cli_runner.go:211] docker network inspect cilium-012958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 01:47:05.318526    4244 network_create.go:272] running [docker network inspect cilium-012958] to gather additional debugging logs...
	I1025 01:47:05.318526    4244 cli_runner.go:164] Run: docker network inspect cilium-012958
	W1025 01:47:05.562561    4244 cli_runner.go:211] docker network inspect cilium-012958 returned with exit code 1
	I1025 01:47:05.562561    4244 network_create.go:275] error running [docker network inspect cilium-012958]: docker network inspect cilium-012958: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-012958
	I1025 01:47:05.562561    4244 network_create.go:277] output of [docker network inspect cilium-012958]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-012958
	
	** /stderr **
	I1025 01:47:05.570377    4244 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 01:47:05.821715    4244 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000132f10] misses:0}
	I1025 01:47:05.821715    4244 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:47:05.821715    4244 network_create.go:115] attempt to create docker network cilium-012958 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 01:47:05.828708    4244 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-012958 cilium-012958
	W1025 01:47:06.049191    4244 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-012958 cilium-012958 returned with exit code 1
	W1025 01:47:06.049191    4244 network_create.go:107] failed to create docker network cilium-012958 192.168.49.0/24, will retry: subnet is taken
	I1025 01:47:06.069209    4244 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000132f10] amended:false}} dirty:map[] misses:0}
	I1025 01:47:06.069209    4244 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:47:06.101307    4244 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000132f10] amended:true}} dirty:map[192.168.49.0:0xc000132f10 192.168.58.0:0xc0005c6310] misses:0}
	I1025 01:47:06.101659    4244 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:47:06.101659    4244 network_create.go:115] attempt to create docker network cilium-012958 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 01:47:06.109812    4244 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-012958 cilium-012958
	W1025 01:47:06.314674    4244 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-012958 cilium-012958 returned with exit code 1
	W1025 01:47:06.314674    4244 network_create.go:107] failed to create docker network cilium-012958 192.168.58.0/24, will retry: subnet is taken
	I1025 01:47:06.337826    4244 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000132f10] amended:true}} dirty:map[192.168.49.0:0xc000132f10 192.168.58.0:0xc0005c6310] misses:1}
	I1025 01:47:06.337973    4244 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:47:06.362615    4244 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000132f10] amended:true}} dirty:map[192.168.49.0:0xc000132f10 192.168.58.0:0xc0005c6310 192.168.67.0:0xc0005c6528] misses:1}
	I1025 01:47:06.362797    4244 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:47:06.362921    4244 network_create.go:115] attempt to create docker network cilium-012958 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 01:47:06.374276    4244 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-012958 cilium-012958
	W1025 01:47:06.617513    4244 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-012958 cilium-012958 returned with exit code 1
	W1025 01:47:06.617513    4244 network_create.go:107] failed to create docker network cilium-012958 192.168.67.0/24, will retry: subnet is taken
	I1025 01:47:06.642524    4244 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000132f10] amended:true}} dirty:map[192.168.49.0:0xc000132f10 192.168.58.0:0xc0005c6310 192.168.67.0:0xc0005c6528] misses:2}
	I1025 01:47:06.642524    4244 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:47:06.665506    4244 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000132f10] amended:true}} dirty:map[192.168.49.0:0xc000132f10 192.168.58.0:0xc0005c6310 192.168.67.0:0xc0005c6528 192.168.76.0:0xc00064cae8] misses:2}
	I1025 01:47:06.665506    4244 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:47:06.665506    4244 network_create.go:115] attempt to create docker network cilium-012958 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 01:47:06.673506    4244 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-012958 cilium-012958
	I1025 01:47:07.028855    4244 network_create.go:99] docker network cilium-012958 192.168.76.0/24 created
	I1025 01:47:07.028855    4244 kic.go:106] calculated static IP "192.168.76.2" for the "cilium-012958" container
	I1025 01:47:07.042837    4244 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 01:47:07.300379    4244 cli_runner.go:164] Run: docker volume create cilium-012958 --label name.minikube.sigs.k8s.io=cilium-012958 --label created_by.minikube.sigs.k8s.io=true
	I1025 01:47:07.548811    4244 oci.go:103] Successfully created a docker volume cilium-012958
	I1025 01:47:07.555817    4244 cli_runner.go:164] Run: docker run --rm --name cilium-012958-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-012958 --entrypoint /usr/bin/test -v cilium-012958:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	I1025 01:47:10.845297    4244 cli_runner.go:217] Completed: docker run --rm --name cilium-012958-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-012958 --entrypoint /usr/bin/test -v cilium-012958:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: (3.2894564s)
	I1025 01:47:10.845297    4244 oci.go:107] Successfully prepared a docker volume cilium-012958
	I1025 01:47:10.845297    4244 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 01:47:10.845297    4244 kic.go:179] Starting extracting preloaded images to volume ...
	I1025 01:47:10.852333    4244 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-012958:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 01:47:33.753695    4244 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-012958:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -I lz4 -xf /preloaded.tar -C /extractDir: (22.901202s)
	I1025 01:47:33.753695    4244 kic.go:188] duration metric: took 22.908238 seconds to extract preloaded images to volume
	I1025 01:47:33.763363    4244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:47:34.371046    4244 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:72 OomKillDisable:true NGoroutines:58 SystemTime:2022-10-25 01:47:33.9358736 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:47:34.380075    4244 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 01:47:34.966839    4244 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-012958 --name cilium-012958 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-012958 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-012958 --network cilium-012958 --ip 192.168.76.2 --volume cilium-012958:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191
	I1025 01:47:36.467280    4244 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-012958 --name cilium-012958 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-012958 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-012958 --network cilium-012958 --ip 192.168.76.2 --volume cilium-012958:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191: (1.5004306s)
	I1025 01:47:36.477289    4244 cli_runner.go:164] Run: docker container inspect cilium-012958 --format={{.State.Running}}
	I1025 01:47:36.711909    4244 cli_runner.go:164] Run: docker container inspect cilium-012958 --format={{.State.Status}}
	I1025 01:47:36.946924    4244 cli_runner.go:164] Run: docker exec cilium-012958 stat /var/lib/dpkg/alternatives/iptables
	I1025 01:47:37.353941    4244 oci.go:144] the created container "cilium-012958" has a running status.
	I1025 01:47:37.353941    4244 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-012958\id_rsa...
	I1025 01:47:38.018307    4244 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-012958\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 01:47:38.388751    4244 cli_runner.go:164] Run: docker container inspect cilium-012958 --format={{.State.Status}}
	I1025 01:47:38.633011    4244 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 01:47:38.633011    4244 kic_runner.go:114] Args: [docker exec --privileged cilium-012958 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 01:47:39.000211    4244 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-012958\id_rsa...
	I1025 01:47:39.537714    4244 cli_runner.go:164] Run: docker container inspect cilium-012958 --format={{.State.Status}}
	I1025 01:47:39.754571    4244 machine.go:88] provisioning docker machine ...
	I1025 01:47:39.754653    4244 ubuntu.go:169] provisioning hostname "cilium-012958"
	I1025 01:47:39.762427    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:47:39.977432    4244 main.go:134] libmachine: Using SSH client type: native
	I1025 01:47:39.984434    4244 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50301 <nil> <nil>}
	I1025 01:47:39.984434    4244 main.go:134] libmachine: About to run SSH command:
	sudo hostname cilium-012958 && echo "cilium-012958" | sudo tee /etc/hostname
	I1025 01:47:40.212403    4244 main.go:134] libmachine: SSH cmd err, output: <nil>: cilium-012958
	
	I1025 01:47:40.221344    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:47:40.415305    4244 main.go:134] libmachine: Using SSH client type: native
	I1025 01:47:40.415305    4244 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50301 <nil> <nil>}
	I1025 01:47:40.415305    4244 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-012958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-012958/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-012958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 01:47:40.613214    4244 main.go:134] libmachine: SSH cmd err, output: <nil>: 
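
The hostname and /etc/hosts commands above are executed over SSH to the container's published port 22 (mapped to 127.0.0.1:50301 here). A small sketch of running one remote command with golang.org/x/crypto/ssh, assuming an unencrypted private key like the generated id_rsa; this is illustrative, not libmachine's actual client.

// Sketch: run a remote command over SSH. Address and key path are placeholders.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; never in production
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("127.0.0.1:50301", `C:\path\to\id_rsa`,
		`sudo hostname cilium-012958 && echo "cilium-012958" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
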
	I1025 01:47:40.616218    4244 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I1025 01:47:40.616218    4244 ubuntu.go:177] setting up certificates
	I1025 01:47:40.616218    4244 provision.go:83] configureAuth start
	I1025 01:47:40.623197    4244 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-012958
	I1025 01:47:40.817269    4244 provision.go:138] copyHostCerts
	I1025 01:47:40.817269    4244 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I1025 01:47:40.817269    4244 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I1025 01:47:40.817269    4244 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1675 bytes)
	I1025 01:47:40.818269    4244 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I1025 01:47:40.818269    4244 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I1025 01:47:40.819258    4244 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1025 01:47:40.820256    4244 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I1025 01:47:40.820256    4244 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I1025 01:47:40.820256    4244 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1025 01:47:40.821255    4244 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cilium-012958 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-012958]
	I1025 01:47:41.049281    4244 provision.go:172] copyRemoteCerts
	I1025 01:47:41.058260    4244 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 01:47:41.065269    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:47:41.276545    4244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50301 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-012958\id_rsa Username:docker}
	I1025 01:47:41.409569    4244 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 01:47:41.459148    4244 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1025 01:47:41.514139    4244 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 01:47:41.564441    4244 provision.go:86] duration metric: configureAuth took 948.216ms
	I1025 01:47:41.564441    4244 ubuntu.go:193] setting minikube options for container-runtime
	I1025 01:47:41.565500    4244 config.go:180] Loaded profile config "cilium-012958": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:47:41.572486    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:47:41.768385    4244 main.go:134] libmachine: Using SSH client type: native
	I1025 01:47:41.768997    4244 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50301 <nil> <nil>}
	I1025 01:47:41.769101    4244 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 01:47:41.970762    4244 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 01:47:41.970762    4244 ubuntu.go:71] root file system type: overlay
	I1025 01:47:41.971774    4244 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 01:47:41.981766    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:47:42.178996    4244 main.go:134] libmachine: Using SSH client type: native
	I1025 01:47:42.178996    4244 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50301 <nil> <nil>}
	I1025 01:47:42.178996    4244 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 01:47:42.412990    4244 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 01:47:42.422986    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:47:42.622843    4244 main.go:134] libmachine: Using SSH client type: native
	I1025 01:47:42.623474    4244 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50301 <nil> <nil>}
	I1025 01:47:42.623543    4244 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 01:47:48.313778    4244 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-09-08 23:09:37.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-10-25 01:47:42.407470000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1025 01:47:48.313778    4244 machine.go:91] provisioned docker machine in 8.5591466s
	I1025 01:47:48.313778    4244 client.go:171] LocalClient.Create took 43.263827s
	I1025 01:47:48.313778    4244 start.go:167] duration metric: libmachine.API.Create for "cilium-012958" took 43.263827s
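
The diff above shows the provisioner replacing the stock docker.service ExecStart (an empty ExecStart= clears the inherited command, then the TLS-enabled dockerd line is set). A rough sketch of rendering such an override with text/template; the flags and paths are copied from the log but the template itself is an assumption, not minikube's template.

// Sketch: render a docker.service override similar to the one written above.
package main

import (
	"os"
	"text/template"
)

const unit = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}}
`

type opts struct {
	Port                          int
	CACert, ServerCert, ServerKey string
}

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	// Values below mirror the log; adjust as needed.
	_ = t.Execute(os.Stdout, opts{2376, "/etc/docker/ca.pem", "/etc/docker/server.pem", "/etc/docker/server-key.pem"})
}
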
	I1025 01:47:48.313778    4244 start.go:300] post-start starting for "cilium-012958" (driver="docker")
	I1025 01:47:48.313778    4244 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 01:47:48.335782    4244 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 01:47:48.351775    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:47:48.602085    4244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50301 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-012958\id_rsa Username:docker}
	I1025 01:47:48.703249    4244 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 01:47:48.717156    4244 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 01:47:48.717156    4244 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 01:47:48.717156    4244 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 01:47:48.717156    4244 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1025 01:47:48.717156    4244 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I1025 01:47:48.717156    4244 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I1025 01:47:48.718167    4244 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem -> 42002.pem in /etc/ssl/certs
	I1025 01:47:48.730164    4244 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 01:47:48.750448    4244 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem --> /etc/ssl/certs/42002.pem (1708 bytes)
	I1025 01:47:48.804273    4244 start.go:303] post-start completed in 490.492ms
	I1025 01:47:48.815850    4244 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-012958
	I1025 01:47:49.027722    4244 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\config.json ...
	I1025 01:47:49.038719    4244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 01:47:49.045707    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:47:49.276918    4244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50301 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-012958\id_rsa Username:docker}
	I1025 01:47:49.415887    4244 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 01:47:49.427893    4244 start.go:128] duration metric: createHost completed in 44.3839424s
	I1025 01:47:49.427893    4244 start.go:83] releasing machines lock for "cilium-012958", held for 44.3839424s
	I1025 01:47:49.438893    4244 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-012958
	I1025 01:47:49.643764    4244 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 01:47:49.652770    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:47:49.652770    4244 ssh_runner.go:195] Run: systemctl --version
	I1025 01:47:49.659781    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:47:49.864765    4244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50301 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-012958\id_rsa Username:docker}
	I1025 01:47:49.892832    4244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50301 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-012958\id_rsa Username:docker}
	I1025 01:47:50.102047    4244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 01:47:50.127903    4244 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I1025 01:47:50.177619    4244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:47:50.385594    4244 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 01:47:50.615764    4244 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 01:47:50.641752    4244 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1025 01:47:50.651749    4244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 01:47:50.699250    4244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 01:47:50.745130    4244 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 01:47:50.964991    4244 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 01:47:51.221985    4244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:47:51.437511    4244 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 01:47:55.779567    4244 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.3420258s)
	I1025 01:47:55.797358    4244 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 01:47:56.001365    4244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:47:56.240127    4244 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1025 01:47:56.265039    4244 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 01:47:56.275072    4244 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
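
The "Will wait 60s for socket path /var/run/cri-dockerd.sock" step above polls for the CRI socket before proceeding. A minimal stand-alone sketch of that kind of wait loop (not minikube's actual retry helper):

// Sketch: poll for a filesystem path with a deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // path exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
}
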
	I1025 01:47:56.290052    4244 start.go:472] Will wait 60s for crictl version
	I1025 01:47:56.304040    4244 ssh_runner.go:195] Run: sudo crictl version
	I1025 01:47:56.386063    4244 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.18
	RuntimeApiVersion:  1.41.0
	I1025 01:47:56.398060    4244 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 01:47:56.474033    4244 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 01:47:56.557038    4244 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.18 ...
	I1025 01:47:56.564050    4244 cli_runner.go:164] Run: docker exec -t cilium-012958 dig +short host.docker.internal
	I1025 01:47:56.926501    4244 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1025 01:47:56.940478    4244 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1025 01:47:56.952471    4244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
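
The bash one-liner above pins host.minikube.internal in /etc/hosts: it drops any existing line ending in that name and appends a fresh "IP<tab>name" entry. An equivalent sketch in Go, purely illustrative (it also drops blank lines, which the shell version does not):

// Sketch: idempotently pin a hostname in an /etc/hosts-style file.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" {
			continue // drop blank lines (simplification in this sketch)
		}
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(pinHost("/etc/hosts", "192.168.65.2", "host.minikube.internal"))
}
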
	I1025 01:47:56.993522    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:47:57.211895    4244 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 01:47:57.223040    4244 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 01:47:57.280320    4244 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 01:47:57.280320    4244 docker.go:542] Images already preloaded, skipping extraction
	I1025 01:47:57.300892    4244 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 01:47:57.364550    4244 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 01:47:57.364550    4244 cache_images.go:84] Images are preloaded, skipping loading
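
Image extraction is skipped above because "docker images --format {{.Repository}}:{{.Tag}}" already lists every image from the preload. A small sketch of performing that check from Go; the "required" list below is illustrative, not the authoritative preload manifest.

// Sketch: list local images and check a required set is present.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func localImages() (map[string]bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	return have, nil
}

func main() {
	have, err := localImages()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, want := range []string{"registry.k8s.io/pause:3.8", "registry.k8s.io/etcd:3.5.4-0"} {
		fmt.Printf("%-35s present: %v\n", want, have[want])
	}
}
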
	I1025 01:47:57.372555    4244 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 01:47:57.556443    4244 cni.go:95] Creating CNI manager for "cilium"
	I1025 01:47:57.556443    4244 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 01:47:57.556443    4244 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-012958 NodeName:cilium-012958 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1025 01:47:57.556443    4244 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "cilium-012958"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 01:47:57.556443    4244 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=cilium-012958 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:cilium-012958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
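
The kubeadm config printed above carries the networking values (pod CIDR 10.244.0.0/16, service CIDR 10.96.0.0/12) that the Cilium CNI manager later consumes. A short sketch of pulling those fields back out of the ClusterConfiguration with a YAML decoder; gopkg.in/yaml.v3 is an assumed choice here, any YAML library would do.

// Sketch: read networking fields from a kubeadm ClusterConfiguration snippet.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

const clusterCfg = `
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`

type clusterConfiguration struct {
	Networking struct {
		PodSubnet     string `yaml:"podSubnet"`
		ServiceSubnet string `yaml:"serviceSubnet"`
	} `yaml:"networking"`
}

func main() {
	var cfg clusterConfiguration
	if err := yaml.Unmarshal([]byte(clusterCfg), &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Networking.PodSubnet, cfg.Networking.ServiceSubnet)
}
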
	I1025 01:47:57.566437    4244 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1025 01:47:57.596899    4244 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 01:47:57.614891    4244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 01:47:57.685421    4244 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
	I1025 01:47:57.729664    4244 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 01:47:57.765668    4244 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2036 bytes)
	I1025 01:47:57.822291    4244 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 01:47:57.831300    4244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 01:47:57.853302    4244 certs.go:54] Setting up C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958 for IP: 192.168.76.2
	I1025 01:47:57.853302    4244 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key
	I1025 01:47:57.853302    4244 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key
	I1025 01:47:57.854304    4244 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\client.key
	I1025 01:47:57.854304    4244 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\client.crt with IP's: []
	I1025 01:47:58.394844    4244 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\client.crt ...
	I1025 01:47:58.394844    4244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\client.crt: {Name:mk5c53721d5c009fdf2494ac59f1fd4a2b68f556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:47:58.396831    4244 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\client.key ...
	I1025 01:47:58.396831    4244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\client.key: {Name:mka8b4573ee4412ec4000665023a33fd211f38b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:47:58.402031    4244 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\apiserver.key.31bdca25
	I1025 01:47:58.402837    4244 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 01:47:58.864357    4244 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\apiserver.crt.31bdca25 ...
	I1025 01:47:58.865362    4244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\apiserver.crt.31bdca25: {Name:mk2b34873225c4cecc95dc60b70ebe1dba5ef1c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:47:58.866381    4244 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\apiserver.key.31bdca25 ...
	I1025 01:47:58.866381    4244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\apiserver.key.31bdca25: {Name:mk715f9dcdb37cd36be49eb03826842e8256fc6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:47:58.867357    4244 certs.go:320] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\apiserver.crt.31bdca25 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\apiserver.crt
	I1025 01:47:58.873355    4244 certs.go:324] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\apiserver.key.31bdca25 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\apiserver.key
	I1025 01:47:58.875368    4244 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\proxy-client.key
	I1025 01:47:58.875794    4244 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\proxy-client.crt with IP's: []
	I1025 01:47:59.205017    4244 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\proxy-client.crt ...
	I1025 01:47:59.205017    4244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\proxy-client.crt: {Name:mk2a65da0e1d13abc0cb86fd2cefa9405df0e82c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:47:59.206189    4244 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\proxy-client.key ...
	I1025 01:47:59.206189    4244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\proxy-client.key: {Name:mk22f4cfe2cef88db940ff16a508de40e96b347e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
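
The apiserver certificate generated above is signed by the shared minikubeCA and carries the SANs listed in the log (192.168.76.2, 10.96.0.1, 127.0.0.1, 10.0.0.1 plus the host names). A compressed sketch of issuing a CA-signed serving certificate with those SANs using crypto/x509; key sizes, validity, and error handling are simplified and are not minikube's actual code.

// Sketch: CA-signed serving cert with IP and DNS SANs (errors elided for brevity).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "cilium-012958"},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.76.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
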
	I1025 01:47:59.214630    4244 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\4200.pem (1338 bytes)
	W1025 01:47:59.214630    4244 certs.go:384] ignoring C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\4200_empty.pem, impossibly tiny 0 bytes
	I1025 01:47:59.215627    4244 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1025 01:47:59.215627    4244 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1025 01:47:59.215627    4244 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1025 01:47:59.215627    4244 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1025 01:47:59.216628    4244 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem (1708 bytes)
	I1025 01:47:59.218635    4244 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 01:47:59.282018    4244 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 01:47:59.345068    4244 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 01:47:59.410749    4244 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-012958\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 01:47:59.469876    4244 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 01:47:59.544570    4244 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 01:47:59.612409    4244 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 01:47:59.751393    4244 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 01:47:59.818965    4244 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem --> /usr/share/ca-certificates/42002.pem (1708 bytes)
	I1025 01:47:59.887451    4244 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 01:47:59.954080    4244 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\4200.pem --> /usr/share/ca-certificates/4200.pem (1338 bytes)
	I1025 01:48:00.023057    4244 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 01:48:00.076096    4244 ssh_runner.go:195] Run: openssl version
	I1025 01:48:00.115091    4244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4200.pem && ln -fs /usr/share/ca-certificates/4200.pem /etc/ssl/certs/4200.pem"
	I1025 01:48:00.151761    4244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4200.pem
	I1025 01:48:00.167741    4244 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 25 00:08 /usr/share/ca-certificates/4200.pem
	I1025 01:48:00.183769    4244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4200.pem
	I1025 01:48:00.223755    4244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4200.pem /etc/ssl/certs/51391683.0"
	I1025 01:48:00.258767    4244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42002.pem && ln -fs /usr/share/ca-certificates/42002.pem /etc/ssl/certs/42002.pem"
	I1025 01:48:00.310771    4244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42002.pem
	I1025 01:48:00.329767    4244 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 25 00:08 /usr/share/ca-certificates/42002.pem
	I1025 01:48:00.353792    4244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42002.pem
	I1025 01:48:00.391754    4244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42002.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 01:48:00.439965    4244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 01:48:00.483970    4244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 01:48:00.497955    4244 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 25 00:00 /usr/share/ca-certificates/minikubeCA.pem
	I1025 01:48:00.515957    4244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 01:48:00.544954    4244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
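
The openssl-hash symlinks created above make the minikubeCA and user certs discoverable under /etc/ssl/certs so that TLS clients in the node trust them. A brief sketch of checking that trust relationship directly, verifying a certificate file against the CA with crypto/x509; the file paths are placeholders.

// Sketch: verify a certificate against a CA bundle.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func verify(caPath, certPath string) error {
	caPEM, err := os.ReadFile(caPath)
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return fmt.Errorf("no CA certs parsed from %s", caPath)
	}
	certPEM, err := os.ReadFile(certPath)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(certPEM)
	if block == nil {
		return fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	_, err = cert.Verify(x509.VerifyOptions{Roots: pool})
	return err
}

func main() {
	fmt.Println(verify("/usr/share/ca-certificates/minikubeCA.pem", "/var/lib/minikube/certs/apiserver.crt"))
}
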
	I1025 01:48:00.571020    4244 kubeadm.go:396] StartCluster: {Name:cilium-012958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-012958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 01:48:00.586960    4244 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 01:48:00.697771    4244 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 01:48:00.756756    4244 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 01:48:00.888996    4244 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1025 01:48:00.908772    4244 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 01:48:00.945797    4244 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 01:48:00.945797    4244 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 01:48:01.086407    4244 kubeadm.go:317] W1025 01:48:01.082846    1238 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 01:48:01.179350    4244 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 01:48:01.389942    4244 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 01:48:27.023621    4244 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1025 01:48:27.023621    4244 kubeadm.go:317] [preflight] Running pre-flight checks
	I1025 01:48:27.023621    4244 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 01:48:27.024622    4244 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 01:48:27.024622    4244 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 01:48:27.024622    4244 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 01:48:27.027604    4244 out.go:204]   - Generating certificates and keys ...
	I1025 01:48:27.027604    4244 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1025 01:48:27.027604    4244 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1025 01:48:27.028603    4244 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 01:48:27.028603    4244 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1025 01:48:27.028603    4244 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1025 01:48:27.028603    4244 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1025 01:48:27.028603    4244 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1025 01:48:27.028603    4244 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [cilium-012958 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 01:48:27.029608    4244 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1025 01:48:27.029608    4244 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [cilium-012958 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 01:48:27.029608    4244 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 01:48:27.029608    4244 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 01:48:27.029608    4244 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1025 01:48:27.029608    4244 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 01:48:27.029608    4244 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 01:48:27.030604    4244 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 01:48:27.030604    4244 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 01:48:27.030604    4244 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 01:48:27.030604    4244 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 01:48:27.030604    4244 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 01:48:27.030604    4244 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1025 01:48:27.030604    4244 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 01:48:27.033614    4244 out.go:204]   - Booting up control plane ...
	I1025 01:48:27.033614    4244 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 01:48:27.033614    4244 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 01:48:27.033614    4244 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 01:48:27.033614    4244 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 01:48:27.034603    4244 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 01:48:27.034603    4244 kubeadm.go:317] [apiclient] All control plane components are healthy after 18.014737 seconds
	I1025 01:48:27.034603    4244 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 01:48:27.034603    4244 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 01:48:27.034603    4244 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 01:48:27.035604    4244 kubeadm.go:317] [mark-control-plane] Marking the node cilium-012958 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 01:48:27.035604    4244 kubeadm.go:317] [bootstrap-token] Using token: 6cqtb7.jlzs1c3oqwmk1k6o
	I1025 01:48:27.038602    4244 out.go:204]   - Configuring RBAC rules ...
	I1025 01:48:27.038602    4244 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 01:48:27.038602    4244 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 01:48:27.038602    4244 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 01:48:27.039612    4244 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 01:48:27.039612    4244 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 01:48:27.039612    4244 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 01:48:27.039612    4244 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 01:48:27.039612    4244 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I1025 01:48:27.040602    4244 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I1025 01:48:27.040602    4244 kubeadm.go:317] 
	I1025 01:48:27.040602    4244 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I1025 01:48:27.040602    4244 kubeadm.go:317] 
	I1025 01:48:27.040602    4244 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I1025 01:48:27.040602    4244 kubeadm.go:317] 
	I1025 01:48:27.040602    4244 kubeadm.go:317]   mkdir -p $HOME/.kube
	I1025 01:48:27.040602    4244 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 01:48:27.040602    4244 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 01:48:27.040602    4244 kubeadm.go:317] 
	I1025 01:48:27.041606    4244 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I1025 01:48:27.041606    4244 kubeadm.go:317] 
	I1025 01:48:27.041606    4244 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 01:48:27.041606    4244 kubeadm.go:317] 
	I1025 01:48:27.041606    4244 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I1025 01:48:27.041606    4244 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 01:48:27.041606    4244 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 01:48:27.041606    4244 kubeadm.go:317] 
	I1025 01:48:27.041606    4244 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 01:48:27.041606    4244 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I1025 01:48:27.042604    4244 kubeadm.go:317] 
	I1025 01:48:27.042604    4244 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 6cqtb7.jlzs1c3oqwmk1k6o \
	I1025 01:48:27.042604    4244 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:cfe7dd7a8e61587818260abb61477c9598aed0e51cc4d8006ee76bf98159c639 \
	I1025 01:48:27.042604    4244 kubeadm.go:317] 	--control-plane 
	I1025 01:48:27.042604    4244 kubeadm.go:317] 
	I1025 01:48:27.042604    4244 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I1025 01:48:27.042604    4244 kubeadm.go:317] 
	I1025 01:48:27.042604    4244 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 6cqtb7.jlzs1c3oqwmk1k6o \
	I1025 01:48:27.043954    4244 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:cfe7dd7a8e61587818260abb61477c9598aed0e51cc4d8006ee76bf98159c639 
	I1025 01:48:27.043954    4244 cni.go:95] Creating CNI manager for "cilium"
	I1025 01:48:27.044860    4244 out.go:177] * Configuring Cilium (Container Networking Interface) ...
	I1025 01:48:27.060089    4244 ssh_runner.go:195] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
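The ssh_runner call above is how minikube ensures the BPF filesystem is mounted at /sys/fs/bpf before installing Cilium; as a minimal sketch, the same check can be repeated against the node later, assuming the profile name cilium-012958 from this log and a minikube CLI on the path:

	minikube -p cilium-012958 ssh "grep 'bpffs /sys/fs/bpf' /proc/mounts"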
	I1025 01:48:27.201794    4244 cilium.go:816] Using pod CIDR: 10.244.0.0/16
	I1025 01:48:27.201853    4244 cilium.go:827] cilium options: {PodSubnet:10.244.0.0/16}
	I1025 01:48:27.201940    4244 cilium.go:831] cilium config:
	---
	# Source: cilium/templates/cilium-agent-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-configmap.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: cilium-config
	  namespace: kube-system
	data:
	
	  # Identity allocation mode selects how identities are shared between cilium
	  # nodes by setting how they are stored. The options are "crd" or "kvstore".
	  # - "crd" stores identities in kubernetes as CRDs (custom resource definition).
	  #   These can be queried with:
	  #     kubectl get ciliumid
	  # - "kvstore" stores identities in a kvstore, etcd or consul, that is
	  #   configured below. Cilium versions before 1.6 supported only the kvstore
	  #   backend. Upgrades from these older cilium versions should continue using
	  #   the kvstore by commenting out the identity-allocation-mode below, or
	  #   setting it to "kvstore".
	  identity-allocation-mode: crd
	  cilium-endpoint-gc-interval: "5m0s"
	
	  # If you want to run cilium in debug mode change this value to true
	  debug: "false"
	  # The agent can be put into the following three policy enforcement modes
	  # default, always and never.
	  # https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
	  enable-policy: "default"
	
	  # Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
	  # address.
	  enable-ipv4: "true"
	
	  # Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
	  # address.
	  enable-ipv6: "false"
	  # Users who wish to specify their own custom CNI configuration file must set
	  # custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
	  custom-cni-conf: "false"
	  enable-bpf-clock-probe: "true"
	  # If you want cilium monitor to aggregate tracing for packets, set this level
	  # to "low", "medium", or "maximum". The higher the level, the less packets
	  # that will be seen in monitor output.
	  monitor-aggregation: medium
	
	  # The monitor aggregation interval governs the typical time between monitor
	  # notification events for each allowed connection.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-interval: 5s
	
	  # The monitor aggregation flags determine which TCP flags, upon the
	  # first observation, cause monitor notifications to be generated.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-flags: all
	  # Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
	  # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
	  bpf-map-dynamic-size-ratio: "0.0025"
	  # bpf-policy-map-max specifies the maximum number of entries in endpoint
	  # policy map (per endpoint)
	  bpf-policy-map-max: "16384"
	  # bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
	  # backend and affinity maps.
	  bpf-lb-map-max: "65536"
	  # Pre-allocation of map entries allows per-packet latency to be reduced, at
	  # the expense of up-front memory allocation for the entries in the maps. The
	  # default value below will minimize memory usage in the default installation;
	  # users who are sensitive to latency may consider setting this to "true".
	  #
	  # This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
	  # this option and behave as though it is set to "true".
	  #
	  # If this value is modified, then during the next Cilium startup the restore
	  # of existing endpoints and tracking of ongoing connections may be disrupted.
	  # As a result, reply packets may be dropped and the load-balancing decisions
	  # for established connections may change.
	  #
	  # If this option is set to "false" during an upgrade from 1.3 or earlier to
	  # 1.4 or later, then it may cause one-time disruptions during the upgrade.
	  preallocate-bpf-maps: "false"
	
	  # Regular expression matching compatible Istio sidecar istio-proxy
	  # container image names
	  sidecar-istio-proxy-image: "cilium/istio_proxy"
	
	  # Name of the cluster. Only relevant when building a mesh of clusters.
	  cluster-name: default
	  # Unique ID of the cluster. Must be unique across all connected clusters and
	  # in the range of 1 to 255. Only relevant when building a mesh of clusters.
	  cluster-id: ""
	
	  # Encapsulation mode for communication between nodes
	  # Possible values:
	  #   - disabled
	  #   - vxlan (default)
	  #   - geneve
	  tunnel: vxlan
	  # Enables L7 proxy for L7 policy enforcement and visibility
	  enable-l7-proxy: "true"
	
	  # wait-bpf-mount makes init container wait until bpf filesystem is mounted
	  wait-bpf-mount: "false"
	
	  masquerade: "true"
	  enable-bpf-masquerade: "true"
	
	  enable-xt-socket-fallback: "true"
	  install-iptables-rules: "true"
	
	  auto-direct-node-routes: "false"
	  enable-bandwidth-manager: "false"
	  enable-local-redirect-policy: "false"
	  kube-proxy-replacement:  "probe"
	  kube-proxy-replacement-healthz-bind-address: ""
	  enable-health-check-nodeport: "true"
	  node-port-bind-protection: "true"
	  enable-auto-protect-node-port-range: "true"
	  enable-session-affinity: "true"
	  k8s-require-ipv4-pod-cidr: "true"
	  k8s-require-ipv6-pod-cidr: "false"
	  enable-endpoint-health-checking: "true"
	  enable-health-checking: "true"
	  enable-well-known-identities: "false"
	  enable-remote-node-identity: "true"
	  operator-api-serve-addr: "127.0.0.1:9234"
	  # Enable Hubble gRPC service.
	  enable-hubble: "true"
	  # UNIX domain socket for Hubble server to listen to.
	  hubble-socket-path:  "/var/run/cilium/hubble.sock"
	  # An additional address for Hubble server to listen to (e.g. ":4244").
	  hubble-listen-address: ":4244"
	  hubble-disable-tls: "false"
	  hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
	  hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
	  hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
	  ipam: "cluster-pool"
	  cluster-pool-ipv4-cidr: "10.244.0.0/16"
	  cluster-pool-ipv4-mask-size: "24"
	  disable-cnp-status-updates: "true"
	  cgroup-root: "/run/cilium/cgroupv2"
	---
	# Source: cilium/templates/cilium-agent-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium
	rules:
	- apiGroups:
	  - networking.k8s.io
	  resources:
	  - networkpolicies
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - namespaces
	  - services
	  - nodes
	  - endpoints
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - pods
	  - pods/finalizers
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	  - delete
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  - nodes/status
	  verbs:
	  - patch
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  # Deprecated for removal in v1.10
	  - create
	  - list
	  - watch
	  - update
	
	  # This is used when validating policies in preflight. It will need to stay
	  # until we figure out how to avoid "get" inside the preflight, and should
	  # ideally be removed then.
	  - get
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	---
	# Source: cilium/templates/cilium-operator-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium-operator
	rules:
	- apiGroups:
	  - ""
	  resources:
	  # to automatically delete [core|kube]dns pods so that they start being
	  # managed by Cilium
	  - pods
	  verbs:
	  - get
	  - list
	  - watch
	  - delete
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  # to perform the translation of a CNP that contains 'ToGroup' to its endpoints
	  - services
	  - endpoints
	  # to check apiserver connectivity
	  - namespaces
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/status
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  - create
	  - get
	  - list
	  - update
	  - watch
	# For cilium-operator running in HA mode.
	#
	# Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
	# between multiple running instances.
	# The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
	# common and fewer objects in the cluster watch "all Leases".
	# The support for leases was introduced in coordination.k8s.io/v1 during Kubernetes 1.14 release.
	# In Cilium we currently don't support HA mode for K8s versions < 1.14. This condition makes sure
	# that we only authorize access to leases resources in supported K8s versions.
	- apiGroups:
	  - coordination.k8s.io
	  resources:
	  - leases
	  verbs:
	  - create
	  - get
	  - update
	---
	# Source: cilium/templates/cilium-agent-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium
	subjects:
	- kind: ServiceAccount
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium-operator
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium-operator
	subjects:
	- kind: ServiceAccount
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-agent-daemonset.yaml
	apiVersion: apps/v1
	kind: DaemonSet
	metadata:
	  labels:
	    k8s-app: cilium
	  name: cilium
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: cilium
	  updateStrategy:
	    rollingUpdate:
	      maxUnavailable: 2
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	        # This annotation plus the CriticalAddonsOnly toleration makes
	        # cilium a critical pod in the cluster, which ensures cilium
	        # gets priority scheduling.
	        # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
	        scheduler.alpha.kubernetes.io/critical-pod: ""
	      labels:
	        k8s-app: cilium
	    spec:
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: k8s-app
	                operator: In
	                values:
	                - cilium
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        command:
	        - cilium-agent
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 10
	          # The initial delay for the liveness probe is intentionally large to
	          # avoid an endless kill & restart cycle in the event that the initial
	          # bootstrapping takes longer than expected.
	          initialDelaySeconds: 120
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        readinessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 3
	          initialDelaySeconds: 5
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_FLANNEL_MASTER_DEVICE
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-master-device
	              name: cilium-config
	              optional: true
	        - name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-uninstall-on-exit
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CLUSTERMESH_CONFIG
	          value: /var/lib/cilium/clustermesh/
	        - name: CILIUM_CNI_CHAINING_MODE
	          valueFrom:
	            configMapKeyRef:
	              key: cni-chaining-mode
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CUSTOM_CNI_CONF
	          valueFrom:
	            configMapKeyRef:
	              key: custom-cni-conf
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        lifecycle:
	          postStart:
	            exec:
	              command:
	              - "/cni-install.sh"
	              - "--enable-debug=false"
	          preStop:
	            exec:
	              command:
	              - /cni-uninstall.sh
	        name: cilium-agent
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	            - SYS_MODULE
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        - mountPath: /host/opt/cni/bin
	          name: cni-path
	        - mountPath: /host/etc/cni/net.d
	          name: etc-cni-netd
	        - mountPath: /var/lib/cilium/clustermesh
	          name: clustermesh-secrets
	          readOnly: true
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	          # Needed to be able to load kernel modules
	        - mountPath: /lib/modules
	          name: lib-modules
	          readOnly: true
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	        - mountPath: /var/lib/cilium/tls/hubble
	          name: hubble-tls
	          readOnly: true
	      hostNetwork: true
	      initContainers:
	      # Required to mount cgroup2 filesystem on the underlying Kubernetes node.
	      # We use the nsenter command with the host's cgroup and mount namespaces enabled.
	      - name: mount-cgroup
	        env:
	          - name: CGROUP_ROOT
	            value: /run/cilium/cgroupv2
	          - name: BIN_PATH
	            value: /opt/cni/bin
	        command:
	          - sh
	          - -c
	          # The statically linked Go program binary is invoked to avoid any
	          # dependency on utilities like sh and mount that can be missing on certain
	          # distros installed on the underlying host. Copy the binary to the
	          # same directory where we install the cilium cni plugin so that exec permissions
	          # are available.
	          - 'cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; rm /hostbin/cilium-mount'
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        volumeMounts:
	          - mountPath: /hostproc
	            name: hostproc
	          - mountPath: /hostbin
	            name: cni-path
	        securityContext:
	          privileged: true
	      - command:
	        - /init-container.sh
	        env:
	        - name: CILIUM_ALL_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_BPF_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-bpf-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_WAIT_BPF_MOUNT
	          valueFrom:
	            configMapKeyRef:
	              key: wait-bpf-mount
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        name: clean-cilium-state
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	          mountPropagation: HostToContainer
	          # Required to mount cgroup filesystem from the host to cilium agent pod
	        - mountPath: /run/cilium/cgroupv2
	          name: cilium-cgroup
	          mountPropagation: HostToContainer
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        resources:
	          requests:
	            cpu: 100m
	            memory: 100Mi
	      restartPolicy: Always
	      priorityClassName: system-node-critical
	      serviceAccount: cilium
	      serviceAccountName: cilium
	      terminationGracePeriodSeconds: 1
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To keep state between restarts / upgrades
	      - hostPath:
	          path: /var/run/cilium
	          type: DirectoryOrCreate
	        name: cilium-run
	        # To keep state between restarts / upgrades for bpf maps
	      - hostPath:
	          path: /sys/fs/bpf
	          type: DirectoryOrCreate
	        name: bpf-maps
	      # To mount cgroup2 filesystem on the host
	      - hostPath:
	          path: /proc
	          type: Directory
	        name: hostproc
	      # To keep state between restarts / upgrades for cgroup2 filesystem
	      - hostPath:
	          path: /run/cilium/cgroupv2
	          type: DirectoryOrCreate
	        name: cilium-cgroup
	      # To install cilium cni plugin in the host
	      - hostPath:
	          path:  /opt/cni/bin
	          type: DirectoryOrCreate
	        name: cni-path
	        # To install cilium cni configuration in the host
	      - hostPath:
	          path: /etc/cni/net.d
	          type: DirectoryOrCreate
	        name: etc-cni-netd
	        # To be able to load kernel modules
	      - hostPath:
	          path: /lib/modules
	        name: lib-modules
	        # To access iptables concurrently with other processes (e.g. kube-proxy)
	      - hostPath:
	          path: /run/xtables.lock
	          type: FileOrCreate
	        name: xtables-lock
	        # To read the clustermesh configuration
	      - name: clustermesh-secrets
	        secret:
	          defaultMode: 420
	          optional: true
	          secretName: cilium-clustermesh
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	      - name: hubble-tls
	        projected:
	          sources:
	          - secret:
	              name: hubble-server-certs
	              items:
	                - key: tls.crt
	                  path: server.crt
	                - key: tls.key
	                  path: server.key
	              optional: true
	          - configMap:
	              name: hubble-ca-cert
	              items:
	                - key: ca.crt
	                  path: client-ca.crt
	              optional: true
	---
	# Source: cilium/templates/cilium-operator-deployment.yaml
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  labels:
	    io.cilium/app: operator
	    name: cilium-operator
	  name: cilium-operator
	  namespace: kube-system
	spec:
	  # We support HA mode only for Kubernetes version > 1.14
	  # See docs on ServerCapabilities.LeasesResourceLock in file pkg/k8s/version/version.go
	  # for more details.
	  replicas: 1
	  selector:
	    matchLabels:
	      io.cilium/app: operator
	      name: cilium-operator
	  strategy:
	    rollingUpdate:
	      maxSurge: 1
	      maxUnavailable: 1
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	      labels:
	        io.cilium/app: operator
	        name: cilium-operator
	    spec:
	      # In HA mode, cilium-operator pods must not be scheduled on the same
	      # node as they will clash with each other.
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: io.cilium/app
	                operator: In
	                values:
	                - operator
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        - --debug=$(CILIUM_DEBUG)
	        command:
	        - cilium-operator-generic
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_DEBUG
	          valueFrom:
	            configMapKeyRef:
	              key: debug
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/operator-generic:v1.9.9@sha256:3726a965cd960295ca3c5e7f2b543c02096c0912c6652eb8bbb9ce54bcaa99d8"
	        imagePullPolicy: IfNotPresent
	        name: cilium-operator
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9234
	            scheme: HTTP
	          initialDelaySeconds: 60
	          periodSeconds: 10
	          timeoutSeconds: 3
	        volumeMounts:
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	      hostNetwork: true
	      restartPolicy: Always
	      priorityClassName: system-cluster-critical
	      serviceAccount: cilium-operator
	      serviceAccountName: cilium-operator
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	
	I1025 01:48:27.202088    4244 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1025 01:48:27.202088    4244 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (23204 bytes)
	I1025 01:48:27.324697    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 01:48:29.836466    4244 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.5117535s)
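With the Cilium manifest applied, the same objects can be inspected on the cluster; a minimal sketch, assuming kubectl is pointed at the cilium-012958 context (the resource names come from the manifest above):

	kubectl --context cilium-012958 -n kube-system get configmap cilium-config -o yaml
	kubectl --context cilium-012958 -n kube-system get daemonset/cilium deployment/cilium-operator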
	I1025 01:48:29.836466    4244 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 01:48:29.847475    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.27.1 minikube.k8s.io/commit=e51468b57074bb26eb09785222979dd1e5fe9cd4 minikube.k8s.io/name=cilium-012958 minikube.k8s.io/updated_at=2022_10_25T01_48_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:29.847475    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:29.853478    4244 ops.go:34] apiserver oom_adj: -16
	I1025 01:48:30.142793    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:30.812289    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:31.317546    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:31.812819    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:32.316838    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:32.815586    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:33.306794    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:33.817377    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:34.308283    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:34.808780    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:35.303490    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:35.809521    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:36.308799    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:36.810943    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:37.302851    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:37.805286    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:38.313891    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:38.815392    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:39.312494    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:39.811389    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:40.215441    4244 kubeadm.go:1067] duration metric: took 10.3789117s to wait for elevateKubeSystemPrivileges.
	I1025 01:48:40.215441    4244 kubeadm.go:398] StartCluster complete in 39.6441597s
	I1025 01:48:40.215441    4244 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:40.215441    4244 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 01:48:40.220468    4244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:40.820498    4244 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-012958" rescaled to 1
	I1025 01:48:40.820498    4244 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 01:48:40.820498    4244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 01:48:40.823494    4244 out.go:177] * Verifying Kubernetes components...
	I1025 01:48:40.820498    4244 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I1025 01:48:40.821469    4244 config.go:180] Loaded profile config "cilium-012958": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:48:40.827539    4244 addons.go:65] Setting storage-provisioner=true in profile "cilium-012958"
	I1025 01:48:40.827539    4244 addons.go:153] Setting addon storage-provisioner=true in "cilium-012958"
	W1025 01:48:40.827539    4244 addons.go:162] addon storage-provisioner should already be in state true
	I1025 01:48:40.827539    4244 addons.go:65] Setting default-storageclass=true in profile "cilium-012958"
	I1025 01:48:40.827539    4244 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-012958"
	I1025 01:48:40.827539    4244 host.go:66] Checking if "cilium-012958" exists ...
	I1025 01:48:40.849472    4244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 01:48:40.857483    4244 cli_runner.go:164] Run: docker container inspect cilium-012958 --format={{.State.Status}}
	I1025 01:48:40.859534    4244 cli_runner.go:164] Run: docker container inspect cilium-012958 --format={{.State.Status}}
	I1025 01:48:41.157487    4244 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 01:48:41.158482    4244 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 01:48:41.159498    4244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 01:48:41.166488    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:48:41.194495    4244 addons.go:153] Setting addon default-storageclass=true in "cilium-012958"
	W1025 01:48:41.194495    4244 addons.go:162] addon default-storageclass should already be in state true
	I1025 01:48:41.194495    4244 host.go:66] Checking if "cilium-012958" exists ...
	I1025 01:48:41.224500    4244 cli_runner.go:164] Run: docker container inspect cilium-012958 --format={{.State.Status}}
	I1025 01:48:41.438483    4244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 01:48:41.447480    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:48:41.452484    4244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50301 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-012958\id_rsa Username:docker}
	I1025 01:48:41.508502    4244 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 01:48:41.508502    4244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 01:48:41.526503    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:48:41.760497    4244 node_ready.go:35] waiting up to 5m0s for node "cilium-012958" to be "Ready" ...
	I1025 01:48:41.788494    4244 node_ready.go:49] node "cilium-012958" has status "Ready":"True"
	I1025 01:48:41.788494    4244 node_ready.go:38] duration metric: took 27.9963ms waiting for node "cilium-012958" to be "Ready" ...
	I1025 01:48:41.788494    4244 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 01:48:41.815814    4244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50301 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-012958\id_rsa Username:docker}
	I1025 01:48:41.889695    4244 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-656749584-pwt27" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:42.016736    4244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 01:48:42.313875    4244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 01:48:42.619481    4244 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.1809893s)
	I1025 01:48:42.619481    4244 start.go:826] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I1025 01:48:43.428677    4244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.411931s)
	I1025 01:48:43.428677    4244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.1147944s)
	I1025 01:48:43.431683    4244 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1025 01:48:43.435674    4244 addons.go:414] enableAddons completed in 2.6151575s
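The pod_ready lines that follow are minikube polling the system-critical pods until they report Ready. A minimal sketch of the equivalent manual check, assuming the same cilium-012958 context and the k8s-app=cilium label from the manifest above:

	kubectl --context cilium-012958 -n kube-system get pods -l k8s-app=cilium
	kubectl --context cilium-012958 -n kube-system wait --for=condition=Ready pod -l k8s-app=cilium --timeout=5m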
	I1025 01:48:44.035993    4244 pod_ready.go:102] pod "cilium-operator-656749584-pwt27" in "kube-system" namespace has status "Ready":"False"
	I1025 01:48:46.505992    4244 pod_ready.go:102] pod "cilium-operator-656749584-pwt27" in "kube-system" namespace has status "Ready":"False"
	I1025 01:48:48.527878    4244 pod_ready.go:102] pod "cilium-operator-656749584-pwt27" in "kube-system" namespace has status "Ready":"False"
	I1025 01:48:51.015241    4244 pod_ready.go:102] pod "cilium-operator-656749584-pwt27" in "kube-system" namespace has status "Ready":"False"
	I1025 01:48:53.579426    4244 pod_ready.go:102] pod "cilium-operator-656749584-pwt27" in "kube-system" namespace has status "Ready":"False"
	I1025 01:48:55.980050    4244 pod_ready.go:102] pod "cilium-operator-656749584-pwt27" in "kube-system" namespace has status "Ready":"False"
	I1025 01:48:56.103979    4244 pod_ready.go:92] pod "cilium-operator-656749584-pwt27" in "kube-system" namespace has status "Ready":"True"
	I1025 01:48:56.103979    4244 pod_ready.go:81] duration metric: took 14.2141846s waiting for pod "cilium-operator-656749584-pwt27" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:56.103979    4244 pod_ready.go:78] waiting up to 5m0s for pod "cilium-wr8k8" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:58.286265    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:00.481139    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:02.925248    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:05.424138    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:07.920351    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:09.981408    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:16.733117    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:18.999840    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:21.094181    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:23.405237    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:25.413360    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:27.914860    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:30.479200    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:32.778978    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:35.089342    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:37.410910    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:39.481228    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:41.919230    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:44.411569    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:48.098965    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:50.501033    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:56.634555    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:58.910094    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:00.931640    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:03.437796    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:05.916635    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:08.412619    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:10.414827    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:13.537561    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:15.913549    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:18.699127    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:20.913036    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:23.416403    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:25.513667    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:27.942609    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:30.411226    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:32.424418    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:34.908406    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:36.911670    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:38.919963    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:40.925357    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:43.416608    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:45.902769    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:47.919115    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:50.415441    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:52.905337    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:54.905365    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:56.910675    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:58.911992    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:00.914297    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:02.919432    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:05.410232    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:07.904047    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:10.407317    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:12.914615    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:15.415104    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:17.416248    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:19.426082    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:21.915313    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:24.423285    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:26.909356    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:28.991937    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:31.410809    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:33.906908    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:35.913584    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:38.406473    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:40.418166    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:42.917423    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:45.411432    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:47.412866    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:49.423361    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:51.912271    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:53.997572    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:56.410885    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:58.414480    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:00.419541    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:02.916939    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:04.919486    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:06.927088    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:09.404441    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:11.908618    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:14.413589    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:16.901943    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:18.912193    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:21.408397    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:23.920655    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:26.404552    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:28.419185    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:30.900247    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:32.926710    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:35.414302    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:37.905368    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:40.423376    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:42.902959    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:44.907847    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:46.912714    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:49.412810    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:51.412972    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:53.909695    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:55.920809    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:56.429191    4244 pod_ready.go:81] duration metric: took 4m0.3235151s waiting for pod "cilium-wr8k8" in "kube-system" namespace to be "Ready" ...
	E1025 01:52:56.429191    4244 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
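Once the wait for cilium-wr8k8 gives up, the obvious follow-up is to look at that pod directly; a minimal sketch, assuming the cluster from this run is still reachable (the pod name comes from the log, the label from the manifest above):

	kubectl --context cilium-012958 -n kube-system describe pod cilium-wr8k8
	kubectl --context cilium-012958 -n kube-system logs -l k8s-app=cilium --tail=50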
	I1025 01:52:56.429191    4244 pod_ready.go:78] waiting up to 5m0s for pod "coredns-565d847f94-fzlj8" in "kube-system" namespace to be "Ready" ...
	I1025 01:52:58.496163    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:00.979925    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:03.475600    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:05.475741    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:07.483168    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:09.489154    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:11.493323    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:13.975411    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:15.990512    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:18.479527    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:20.481144    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:22.979498    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:24.993027    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:27.472370    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:29.474499    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:31.476152    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:33.490834    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:35.499954    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:37.975982    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:39.997060    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:42.486242    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:44.990724    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:46.999604    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:49.478597    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:51.996340    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:54.474171    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:56.493735    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:58.979626    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:00.983824    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:03.494135    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:05.979828    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:08.481092    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:10.481830    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:12.491955    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:14.977806    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:16.988335    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:19.480591    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:21.982276    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:24.596664    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:26.974598    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:28.983914    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:31.491607    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:33.974119    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:36.479741    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:43.030561    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:45.479273    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:47.483893    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:49.501419    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:51.996737    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:54.488100    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:56.996471    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:59.488947    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:02.853551    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:05.952548    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:07.985418    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:10.489676    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:12.989816    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:16.028227    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:18.481191    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:20.490888    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:22.999424    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:25.483090    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:27.970548    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:29.984229    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:31.989794    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:34.481782    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:36.689039    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:39.053921    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:43.885819    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:45.974719    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:47.984941    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:51.294895    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:53.471041    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:55.476556    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:57.486846    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:55:59.988132    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:02.484901    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:04.492486    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:06.975697    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:09.048417    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:11.396782    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:13.684195    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:15.995668    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:18.480177    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:20.489735    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:22.979887    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:25.487847    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:27.980049    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:29.983524    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:32.488573    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:34.489475    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:36.988020    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:38.988377    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:41.479549    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:43.499339    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:45.974466    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:47.994010    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:50.475590    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:52.488313    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:54.986352    4244 pod_ready.go:102] pod "coredns-565d847f94-fzlj8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:56:56.501657    4244 pod_ready.go:81] duration metric: took 4m0.0707903s waiting for pod "coredns-565d847f94-fzlj8" in "kube-system" namespace to be "Ready" ...
	E1025 01:56:56.501657    4244 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1025 01:56:56.502205    4244 pod_ready.go:38] duration metric: took 8m14.7102387s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 01:56:56.504631    4244 out.go:177] 
	W1025 01:56:56.507243    4244 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W1025 01:56:56.507243    4244 out.go:239] * 
	W1025 01:56:56.509925    4244 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 01:56:56.511982    4244 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (596.90s)
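
The failure above comes from minikube's extra-component wait: it polls the labelled system pods (here the coredns pod) every couple of seconds until the deadline expires, then aborts with GUEST_START. The snippet below is a minimal, illustrative sketch of that readiness-poll pattern using client-go; it is not minikube's actual implementation, and it assumes a default kubeconfig location and the k8s-app=kube-dns label seen in the log.

	// readiness_poll.go - illustrative sketch only, not part of minikube.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig at the default location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 2s with a 4m deadline, roughly the cadence and timeout
		// visible in the pod_ready.go lines above.
		err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err != nil {
				return false, err // stop polling on API errors
			}
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
			}
			fmt.Println("kube-dns pod is not Ready yet")
			return false, nil
		})
		if err != nil {
			fmt.Println("timed out waiting for kube-dns:", err)
		}
	}
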

                                                
                                    
TestNetworkPlugins/group/calico/Start (415.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-012958 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-012958 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 80 (6m55.7284116s)

                                                
                                                
-- stdout --
	* [calico-012958] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14956
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node calico-012958 in cluster calico-012958
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.18 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 01:47:26.389343   10588 out.go:296] Setting OutFile to fd 2024 ...
	I1025 01:47:26.447545   10588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:47:26.447545   10588 out.go:309] Setting ErrFile to fd 1604...
	I1025 01:47:26.447545   10588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:47:26.490975   10588 out.go:303] Setting JSON to false
	I1025 01:47:26.494778   10588 start.go:116] hostinfo: {"hostname":"minikube8","uptime":12090,"bootTime":1666650356,"procs":160,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W1025 01:47:26.494937   10588 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 01:47:26.713981   10588 out.go:177] * [calico-012958] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1025 01:47:26.888473   10588 notify.go:220] Checking for updates...
	I1025 01:47:27.062903   10588 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 01:47:27.322816   10588 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I1025 01:47:27.719710   10588 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 01:47:28.038867   10588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 01:47:28.179659   10588 config.go:180] Loaded profile config "auto-012955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:47:28.179842   10588 config.go:180] Loaded profile config "cilium-012958": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:47:28.180470   10588 config.go:180] Loaded profile config "newest-cni-014519": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:47:28.180594   10588 driver.go:362] Setting default libvirt URI to qemu:///system
	I1025 01:47:28.493118   10588 docker.go:137] docker version: linux-20.10.17
	I1025 01:47:28.501533   10588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:47:29.075286   10588 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:80 OomKillDisable:true NGoroutines:68 SystemTime:2022-10-25 01:47:28.6721084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:47:29.079460   10588 out.go:177] * Using the docker driver based on user configuration
	I1025 01:47:29.086289   10588 start.go:282] selected driver: docker
	I1025 01:47:29.086289   10588 start.go:808] validating driver "docker" against <nil>
	I1025 01:47:29.086289   10588 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 01:47:29.150033   10588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:47:29.762063   10588 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:80 OomKillDisable:true NGoroutines:68 SystemTime:2022-10-25 01:47:29.326212 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-p
lugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:47:29.762063   10588 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 01:47:29.762800   10588 start_flags.go:885] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 01:47:29.873488   10588 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 01:47:30.070406   10588 cni.go:95] Creating CNI manager for "calico"
	I1025 01:47:30.070477   10588 start_flags.go:312] Found "Calico" CNI - setting NetworkPlugin=cni
	I1025 01:47:30.070477   10588 start_flags.go:317] config:
	{Name:calico-012958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-012958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 01:47:30.269736   10588 out.go:177] * Starting control plane node calico-012958 in cluster calico-012958
	I1025 01:47:30.466192   10588 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 01:47:30.609920   10588 out.go:177] * Pulling base image ...
	I1025 01:47:30.761176   10588 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 01:47:30.761464   10588 image.go:82] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 01:47:30.761504   10588 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 01:47:30.761504   10588 cache.go:57] Caching tarball of preloaded images
	I1025 01:47:30.762208   10588 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 01:47:30.762208   10588 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 01:47:30.762208   10588 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\config.json ...
	I1025 01:47:30.762892   10588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\config.json: {Name:mkf7a4af146b4e97e7e958745def7b0cd04408e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:47:31.022959   10588 image.go:86] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 01:47:31.023297   10588 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 01:47:31.023297   10588 cache.go:208] Successfully downloaded all kic artifacts
	I1025 01:47:31.023527   10588 start.go:364] acquiring machines lock for calico-012958: {Name:mk956264318b77816f52af5ca33977387beed1a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 01:47:31.023812   10588 start.go:368] acquired machines lock for "calico-012958" in 284.5µs
	I1025 01:47:31.024085   10588 start.go:93] Provisioning new machine with config: &{Name:calico-012958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-012958 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 01:47:31.024386   10588 start.go:125] createHost starting for "" (driver="docker")
	I1025 01:47:31.037078   10588 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 01:47:31.037078   10588 start.go:159] libmachine.API.Create for "calico-012958" (driver="docker")
	I1025 01:47:31.037078   10588 client.go:168] LocalClient.Create starting
	I1025 01:47:31.038076   10588 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I1025 01:47:31.038076   10588 main.go:134] libmachine: Decoding PEM data...
	I1025 01:47:31.038076   10588 main.go:134] libmachine: Parsing certificate...
	I1025 01:47:31.038076   10588 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I1025 01:47:31.038076   10588 main.go:134] libmachine: Decoding PEM data...
	I1025 01:47:31.038076   10588 main.go:134] libmachine: Parsing certificate...
	I1025 01:47:31.052090   10588 cli_runner.go:164] Run: docker network inspect calico-012958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 01:47:31.259973   10588 cli_runner.go:211] docker network inspect calico-012958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 01:47:31.271113   10588 network_create.go:272] running [docker network inspect calico-012958] to gather additional debugging logs...
	I1025 01:47:31.271113   10588 cli_runner.go:164] Run: docker network inspect calico-012958
	W1025 01:47:31.462664   10588 cli_runner.go:211] docker network inspect calico-012958 returned with exit code 1
	I1025 01:47:31.582595   10588 network_create.go:275] error running [docker network inspect calico-012958]: docker network inspect calico-012958: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-012958
	I1025 01:47:31.583230   10588 network_create.go:277] output of [docker network inspect calico-012958]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-012958
	
	** /stderr **
	I1025 01:47:31.590733   10588 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 01:47:31.798172   10588 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00052c340] misses:0}
	I1025 01:47:31.798262   10588 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:47:31.798262   10588 network_create.go:115] attempt to create docker network calico-012958 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 01:47:31.806282   10588 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-012958 calico-012958
	W1025 01:47:32.027679   10588 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-012958 calico-012958 returned with exit code 1
	W1025 01:47:32.027679   10588 network_create.go:107] failed to create docker network calico-012958 192.168.49.0/24, will retry: subnet is taken
	I1025 01:47:32.063868   10588 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00052c340] amended:false}} dirty:map[] misses:0}
	I1025 01:47:32.063868   10588 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:47:32.084719   10588 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00052c340] amended:true}} dirty:map[192.168.49.0:0xc00052c340 192.168.58.0:0xc00014b710] misses:0}
	I1025 01:47:32.084871   10588 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:47:32.084934   10588 network_create.go:115] attempt to create docker network calico-012958 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 01:47:32.094394   10588 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-012958 calico-012958
	W1025 01:47:32.306031   10588 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-012958 calico-012958 returned with exit code 1
	W1025 01:47:32.306031   10588 network_create.go:107] failed to create docker network calico-012958 192.168.58.0/24, will retry: subnet is taken
	I1025 01:47:32.327878   10588 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00052c340] amended:true}} dirty:map[192.168.49.0:0xc00052c340 192.168.58.0:0xc00014b710] misses:1}
	I1025 01:47:32.327878   10588 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:47:32.346356   10588 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00052c340] amended:true}} dirty:map[192.168.49.0:0xc00052c340 192.168.58.0:0xc00014b710 192.168.67.0:0xc00014b7a8] misses:1}
	I1025 01:47:32.347427   10588 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:47:32.347427   10588 network_create.go:115] attempt to create docker network calico-012958 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 01:47:32.356097   10588 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-012958 calico-012958
	I1025 01:47:32.679393   10588 network_create.go:99] docker network calico-012958 192.168.67.0/24 created
	I1025 01:47:32.679393   10588 kic.go:106] calculated static IP "192.168.67.2" for the "calico-012958" container
	I1025 01:47:32.714399   10588 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 01:47:32.941690   10588 cli_runner.go:164] Run: docker volume create calico-012958 --label name.minikube.sigs.k8s.io=calico-012958 --label created_by.minikube.sigs.k8s.io=true
	I1025 01:47:33.169978   10588 oci.go:103] Successfully created a docker volume calico-012958
	I1025 01:47:33.179930   10588 cli_runner.go:164] Run: docker run --rm --name calico-012958-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-012958 --entrypoint /usr/bin/test -v calico-012958:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	I1025 01:47:35.472825   10588 cli_runner.go:217] Completed: docker run --rm --name calico-012958-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-012958 --entrypoint /usr/bin/test -v calico-012958:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: (2.2928787s)
	I1025 01:47:35.472825   10588 oci.go:107] Successfully prepared a docker volume calico-012958
	I1025 01:47:35.472825   10588 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 01:47:35.472825   10588 kic.go:179] Starting extracting preloaded images to volume ...
	I1025 01:47:35.483829   10588 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-012958:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 01:47:57.243403   10588 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-012958:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -I lz4 -xf /preloaded.tar -C /extractDir: (21.7594215s)
	I1025 01:47:57.243513   10588 kic.go:188] duration metric: took 21.770536 seconds to extract preloaded images to volume
	I1025 01:47:57.251309   10588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:47:57.874312   10588 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:83 OomKillDisable:true NGoroutines:61 SystemTime:2022-10-25 01:47:57.4416645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:47:57.889309   10588 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 01:47:58.491194   10588 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-012958 --name calico-012958 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-012958 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-012958 --network calico-012958 --ip 192.168.67.2 --volume calico-012958:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191
	I1025 01:47:59.885425   10588 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-012958 --name calico-012958 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-012958 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-012958 --network calico-012958 --ip 192.168.67.2 --volume calico-012958:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191: (1.3942209s)
	I1025 01:47:59.899509   10588 cli_runner.go:164] Run: docker container inspect calico-012958 --format={{.State.Running}}
	I1025 01:48:00.145742   10588 cli_runner.go:164] Run: docker container inspect calico-012958 --format={{.State.Status}}
	I1025 01:48:00.435203   10588 cli_runner.go:164] Run: docker exec calico-012958 stat /var/lib/dpkg/alternatives/iptables
	I1025 01:48:00.967744   10588 oci.go:144] the created container "calico-012958" has a running status.
	I1025 01:48:00.967744   10588 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa...
	I1025 01:48:01.296401   10588 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 01:48:01.738840   10588 cli_runner.go:164] Run: docker container inspect calico-012958 --format={{.State.Status}}
	I1025 01:48:02.026659   10588 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 01:48:02.026659   10588 kic_runner.go:114] Args: [docker exec --privileged calico-012958 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 01:48:02.535694   10588 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa...
	I1025 01:48:03.148299   10588 cli_runner.go:164] Run: docker container inspect calico-012958 --format={{.State.Status}}
	I1025 01:48:03.400363   10588 machine.go:88] provisioning docker machine ...
	I1025 01:48:03.400363   10588 ubuntu.go:169] provisioning hostname "calico-012958"
	I1025 01:48:03.406399   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:03.640966   10588 main.go:134] libmachine: Using SSH client type: native
	I1025 01:48:03.641971   10588 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50354 <nil> <nil>}
	I1025 01:48:03.641971   10588 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-012958 && echo "calico-012958" | sudo tee /etc/hostname
	I1025 01:48:03.854508   10588 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-012958
	
	I1025 01:48:03.865961   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:04.117298   10588 main.go:134] libmachine: Using SSH client type: native
	I1025 01:48:04.118291   10588 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50354 <nil> <nil>}
	I1025 01:48:04.118291   10588 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-012958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-012958/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-012958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 01:48:04.390385   10588 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1025 01:48:04.390385   10588 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I1025 01:48:04.390385   10588 ubuntu.go:177] setting up certificates
	I1025 01:48:04.390385   10588 provision.go:83] configureAuth start
	I1025 01:48:04.400367   10588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-012958
	I1025 01:48:04.650367   10588 provision.go:138] copyHostCerts
	I1025 01:48:04.650367   10588 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I1025 01:48:04.650367   10588 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I1025 01:48:04.651375   10588 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1025 01:48:04.652367   10588 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I1025 01:48:04.652367   10588 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I1025 01:48:04.652367   10588 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1675 bytes)
	I1025 01:48:04.653365   10588 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I1025 01:48:04.653365   10588 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I1025 01:48:04.654368   10588 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1025 01:48:04.654368   10588 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-012958 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-012958]
	I1025 01:48:04.778907   10588 provision.go:172] copyRemoteCerts
	I1025 01:48:04.794007   10588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 01:48:04.808502   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:05.013078   10588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50354 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa Username:docker}
	I1025 01:48:05.156100   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 01:48:05.213394   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1025 01:48:05.286283   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 01:48:05.361589   10588 provision.go:86] duration metric: configureAuth took 971.1973ms
	I1025 01:48:05.361688   10588 ubuntu.go:193] setting minikube options for container-runtime
	I1025 01:48:05.362382   10588 config.go:180] Loaded profile config "calico-012958": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:48:05.372789   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:05.605937   10588 main.go:134] libmachine: Using SSH client type: native
	I1025 01:48:05.605937   10588 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50354 <nil> <nil>}
	I1025 01:48:05.606937   10588 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 01:48:05.742987   10588 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 01:48:05.742987   10588 ubuntu.go:71] root file system type: overlay
	I1025 01:48:05.742987   10588 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 01:48:05.753943   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:05.957329   10588 main.go:134] libmachine: Using SSH client type: native
	I1025 01:48:05.957329   10588 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50354 <nil> <nil>}
	I1025 01:48:05.957329   10588 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 01:48:06.232445   10588 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 01:48:06.240449   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:06.483404   10588 main.go:134] libmachine: Using SSH client type: native
	I1025 01:48:06.484094   10588 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50354 <nil> <nil>}
	I1025 01:48:06.484094   10588 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 01:48:07.988187   10588 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-09-08 23:09:37.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-10-25 01:48:06.214875000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1025 01:48:07.988187   10588 machine.go:91] provisioned docker machine in 4.5877919s
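The provisioning step above rewrites /lib/systemd/system/docker.service and only restarts the daemon when `diff -u` shows the rendered file differs from the one on disk; the empty `ExecStart=` line clears the command inherited from the stock unit so systemd does not reject a second `ExecStart=` for a Type=notify service. A minimal, hypothetical Go sketch of rendering such a unit with text/template (not minikube's actual source; the template and struct are illustrative, the TLS paths match the files scp'd earlier in this log):

package main

import (
	"os"
	"text/template"
)

// Trimmed unit template: the blank ExecStart= resets any inherited command
// before the dockerd command line with TLS flags is set.
const unitTmpl = `[Unit]
Description=Docker Application Container Engine
After=network-online.target firewalld.service containerd.service
Wants=network-online.target

[Service]
Type=notify
Restart=on-failure
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}}
ExecReload=/bin/kill -s HUP $MAINPID

[Install]
WantedBy=multi-user.target
`

type tlsPaths struct {
	CACert, ServerCert, ServerKey string
}

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	_ = t.Execute(os.Stdout, tlsPaths{
		CACert:     "/etc/docker/ca.pem",
		ServerCert: "/etc/docker/server.pem",
		ServerKey:  "/etc/docker/server-key.pem",
	})
}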
	I1025 01:48:07.988187   10588 client.go:171] LocalClient.Create took 36.9508501s
	I1025 01:48:07.988187   10588 start.go:167] duration metric: libmachine.API.Create for "calico-012958" took 36.9508501s
	I1025 01:48:07.988187   10588 start.go:300] post-start starting for "calico-012958" (driver="docker")
	I1025 01:48:07.988187   10588 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 01:48:08.010282   10588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 01:48:08.020998   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:08.222081   10588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50354 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa Username:docker}
	I1025 01:48:08.360102   10588 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 01:48:08.371546   10588 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 01:48:08.372590   10588 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 01:48:08.372590   10588 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 01:48:08.372590   10588 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1025 01:48:08.372590   10588 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I1025 01:48:08.372590   10588 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I1025 01:48:08.373586   10588 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem -> 42002.pem in /etc/ssl/certs
	I1025 01:48:08.390574   10588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 01:48:08.416569   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem --> /etc/ssl/certs/42002.pem (1708 bytes)
	I1025 01:48:08.464548   10588 start.go:303] post-start completed in 476.3582ms
	I1025 01:48:08.474584   10588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-012958
	I1025 01:48:08.714587   10588 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\config.json ...
	I1025 01:48:08.731551   10588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 01:48:08.735547   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:08.963550   10588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50354 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa Username:docker}
	I1025 01:48:09.118216   10588 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 01:48:09.132228   10588 start.go:128] duration metric: createHost completed in 38.1075755s
	I1025 01:48:09.132228   10588 start.go:83] releasing machines lock for "calico-012958", held for 38.1080283s
	I1025 01:48:09.145217   10588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-012958
	I1025 01:48:09.425142   10588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 01:48:09.447276   10588 ssh_runner.go:195] Run: systemctl --version
	I1025 01:48:09.448142   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:09.459590   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:09.694138   10588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50354 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa Username:docker}
	I1025 01:48:09.706168   10588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50354 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa Username:docker}
	I1025 01:48:09.834108   10588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 01:48:09.916106   10588 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I1025 01:48:09.964103   10588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:48:10.128598   10588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 01:48:10.343502   10588 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 01:48:10.371644   10588 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1025 01:48:10.382640   10588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 01:48:10.416596   10588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 01:48:10.471935   10588 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 01:48:10.724153   10588 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 01:48:10.949266   10588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:48:11.180698   10588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 01:48:11.910142   10588 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 01:48:12.114079   10588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:48:12.305100   10588 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1025 01:48:12.331098   10588 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 01:48:12.344290   10588 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 01:48:12.357283   10588 start.go:472] Will wait 60s for crictl version
	I1025 01:48:12.371290   10588 ssh_runner.go:195] Run: sudo crictl version
	I1025 01:48:12.458084   10588 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.18
	RuntimeApiVersion:  1.41.0
	I1025 01:48:12.465481   10588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 01:48:12.564500   10588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 01:48:12.656558   10588 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.18 ...
	I1025 01:48:12.664569   10588 cli_runner.go:164] Run: docker exec -t calico-012958 dig +short host.docker.internal
	I1025 01:48:13.089876   10588 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1025 01:48:13.105897   10588 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1025 01:48:13.116880   10588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 01:48:13.150185   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:13.355841   10588 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 01:48:13.363459   10588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 01:48:13.423206   10588 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 01:48:13.423206   10588 docker.go:542] Images already preloaded, skipping extraction
	I1025 01:48:13.431181   10588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 01:48:13.500721   10588 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 01:48:13.500721   10588 cache_images.go:84] Images are preloaded, skipping loading
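The two `docker images --format {{.Repository}}:{{.Tag}}` runs above are what backs the "Images already preloaded, skipping extraction" decision. A hypothetical Go sketch of that check, assuming the image list is the stdout shown in this log (the helper name is illustrative only):

package main

import (
	"fmt"
	"strings"
)

// preloaded stands in for the stdout of `docker images --format ...` above.
var preloaded = `registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5`

// hasAllImages reports whether every required image ref appears in the listing.
func hasAllImages(stdout string, required []string) bool {
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(stdout), "\n") {
		have[strings.TrimSpace(line)] = true
	}
	for _, img := range required {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.25.3",
		"registry.k8s.io/etcd:3.5.4-0",
		"registry.k8s.io/coredns/coredns:v1.9.3",
	}
	fmt.Println("skip extraction:", hasAllImages(preloaded, required)) // true
}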
	I1025 01:48:13.509759   10588 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 01:48:13.692825   10588 cni.go:95] Creating CNI manager for "calico"
	I1025 01:48:13.692825   10588 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 01:48:13.692825   10588 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-012958 NodeName:calico-012958 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1025 01:48:13.693521   10588 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "calico-012958"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 01:48:13.693724   10588 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=calico-012958 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:calico-012958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I1025 01:48:13.707327   10588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1025 01:48:13.734826   10588 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 01:48:13.745358   10588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 01:48:13.787561   10588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
	I1025 01:48:13.832592   10588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 01:48:13.868456   10588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2036 bytes)
	I1025 01:48:13.923288   10588 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1025 01:48:13.933289   10588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 01:48:13.961312   10588 certs.go:54] Setting up C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958 for IP: 192.168.67.2
	I1025 01:48:13.961903   10588 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key
	I1025 01:48:13.962152   10588 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key
	I1025 01:48:13.962700   10588 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\client.key
	I1025 01:48:13.962778   10588 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\client.crt with IP's: []
	I1025 01:48:14.503607   10588 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\client.crt ...
	I1025 01:48:14.503703   10588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\client.crt: {Name:mke0d8cb06416f502e6bcf65687095510450a54f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:14.505541   10588 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\client.key ...
	I1025 01:48:14.505625   10588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\client.key: {Name:mk568a00cc73687479aeb8bbbf37e26ab417138f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:14.506440   10588 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.key.c7fa3a9e
	I1025 01:48:14.506440   10588 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 01:48:14.723739   10588 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.crt.c7fa3a9e ...
	I1025 01:48:14.723739   10588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.crt.c7fa3a9e: {Name:mk3c787c606463584820032d54fc0a7605009379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:14.724747   10588 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.key.c7fa3a9e ...
	I1025 01:48:14.724747   10588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.key.c7fa3a9e: {Name:mkcb9282ccf2a1f51186f962f8338e1502e42eaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:14.725769   10588 certs.go:320] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.crt
	I1025 01:48:14.732753   10588 certs.go:324] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.key
	I1025 01:48:14.733775   10588 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.key
	I1025 01:48:14.734743   10588 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.crt with IP's: []
	I1025 01:48:14.899928   10588 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.crt ...
	I1025 01:48:14.899928   10588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.crt: {Name:mk1cb3c8e163d99a20ecc7a3178be6ed3576502c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:14.901499   10588 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.key ...
	I1025 01:48:14.901499   10588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.key: {Name:mka6f6bb8292025d453be0bbe5e383e9b983ccad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:14.911566   10588 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\4200.pem (1338 bytes)
	W1025 01:48:14.911566   10588 certs.go:384] ignoring C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\4200_empty.pem, impossibly tiny 0 bytes
	I1025 01:48:14.911566   10588 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1025 01:48:14.912568   10588 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1025 01:48:14.912568   10588 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1025 01:48:14.912568   10588 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1025 01:48:14.913569   10588 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem (1708 bytes)
	I1025 01:48:14.914572   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 01:48:14.968993   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 01:48:15.035988   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 01:48:15.085963   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 01:48:15.140000   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 01:48:15.204346   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 01:48:15.252332   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 01:48:15.302328   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 01:48:15.351331   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 01:48:15.407330   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\4200.pem --> /usr/share/ca-certificates/4200.pem (1338 bytes)
	I1025 01:48:15.456837   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem --> /usr/share/ca-certificates/42002.pem (1708 bytes)
	I1025 01:48:15.510446   10588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 01:48:15.565102   10588 ssh_runner.go:195] Run: openssl version
	I1025 01:48:15.606110   10588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42002.pem && ln -fs /usr/share/ca-certificates/42002.pem /etc/ssl/certs/42002.pem"
	I1025 01:48:15.646094   10588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42002.pem
	I1025 01:48:15.657106   10588 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 25 00:08 /usr/share/ca-certificates/42002.pem
	I1025 01:48:15.666080   10588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42002.pem
	I1025 01:48:15.694118   10588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42002.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 01:48:15.729107   10588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 01:48:15.766731   10588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 01:48:15.777749   10588 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 25 00:00 /usr/share/ca-certificates/minikubeCA.pem
	I1025 01:48:15.791762   10588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 01:48:15.815737   10588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 01:48:15.845732   10588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4200.pem && ln -fs /usr/share/ca-certificates/4200.pem /etc/ssl/certs/4200.pem"
	I1025 01:48:15.890734   10588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4200.pem
	I1025 01:48:15.902715   10588 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 25 00:08 /usr/share/ca-certificates/4200.pem
	I1025 01:48:15.912710   10588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4200.pem
	I1025 01:48:15.934710   10588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4200.pem /etc/ssl/certs/51391683.0"
	I1025 01:48:15.955720   10588 kubeadm.go:396] StartCluster: {Name:calico-012958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-012958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 01:48:15.962760   10588 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 01:48:16.040325   10588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 01:48:16.072326   10588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 01:48:16.096335   10588 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1025 01:48:16.106323   10588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 01:48:16.138322   10588 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 01:48:16.138322   10588 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 01:48:16.233038   10588 kubeadm.go:317] W1025 01:48:16.230616    1227 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 01:48:16.319162   10588 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 01:48:16.505400   10588 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 01:48:43.379547   10588 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1025 01:48:43.379547   10588 kubeadm.go:317] [preflight] Running pre-flight checks
	I1025 01:48:43.380104   10588 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 01:48:43.380344   10588 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 01:48:43.380344   10588 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 01:48:43.380691   10588 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 01:48:43.386367   10588 out.go:204]   - Generating certificates and keys ...
	I1025 01:48:43.386778   10588 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1025 01:48:43.386861   10588 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1025 01:48:43.387019   10588 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 01:48:43.387272   10588 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1025 01:48:43.387272   10588 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1025 01:48:43.387571   10588 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1025 01:48:43.387571   10588 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1025 01:48:43.388188   10588 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [calico-012958 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1025 01:48:43.388188   10588 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1025 01:48:43.388188   10588 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [calico-012958 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1025 01:48:43.388730   10588 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 01:48:43.388906   10588 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 01:48:43.388906   10588 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1025 01:48:43.388906   10588 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 01:48:43.388906   10588 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 01:48:43.389881   10588 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 01:48:43.389881   10588 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 01:48:43.389881   10588 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 01:48:43.390625   10588 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 01:48:43.390625   10588 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 01:48:43.390625   10588 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1025 01:48:43.390625   10588 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 01:48:43.393631   10588 out.go:204]   - Booting up control plane ...
	I1025 01:48:43.393631   10588 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 01:48:43.393631   10588 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 01:48:43.393631   10588 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 01:48:43.394631   10588 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 01:48:43.394631   10588 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 01:48:43.394631   10588 kubeadm.go:317] [apiclient] All control plane components are healthy after 20.006498 seconds
	I1025 01:48:43.395636   10588 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 01:48:43.395636   10588 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 01:48:43.395636   10588 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 01:48:43.396625   10588 kubeadm.go:317] [mark-control-plane] Marking the node calico-012958 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 01:48:43.396625   10588 kubeadm.go:317] [bootstrap-token] Using token: kcm1je.niaeugxnay31jj1b
	I1025 01:48:43.399624   10588 out.go:204]   - Configuring RBAC rules ...
	I1025 01:48:43.399624   10588 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 01:48:43.399624   10588 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 01:48:43.399624   10588 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 01:48:43.400602   10588 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 01:48:43.401440   10588 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 01:48:43.401729   10588 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 01:48:43.401998   10588 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 01:48:43.402191   10588 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I1025 01:48:43.402191   10588 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I1025 01:48:43.402191   10588 kubeadm.go:317] 
	I1025 01:48:43.402895   10588 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I1025 01:48:43.402969   10588 kubeadm.go:317] 
	I1025 01:48:43.403045   10588 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I1025 01:48:43.403045   10588 kubeadm.go:317] 
	I1025 01:48:43.403045   10588 kubeadm.go:317]   mkdir -p $HOME/.kube
	I1025 01:48:43.403045   10588 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 01:48:43.403045   10588 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 01:48:43.403606   10588 kubeadm.go:317] 
	I1025 01:48:43.403668   10588 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I1025 01:48:43.403668   10588 kubeadm.go:317] 
	I1025 01:48:43.403846   10588 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 01:48:43.403846   10588 kubeadm.go:317] 
	I1025 01:48:43.403846   10588 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I1025 01:48:43.403846   10588 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 01:48:43.403846   10588 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 01:48:43.403846   10588 kubeadm.go:317] 
	I1025 01:48:43.403846   10588 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 01:48:43.404679   10588 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I1025 01:48:43.404679   10588 kubeadm.go:317] 
	I1025 01:48:43.404679   10588 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token kcm1je.niaeugxnay31jj1b \
	I1025 01:48:43.404679   10588 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:cfe7dd7a8e61587818260abb61477c9598aed0e51cc4d8006ee76bf98159c639 \
	I1025 01:48:43.404679   10588 kubeadm.go:317] 	--control-plane 
	I1025 01:48:43.404679   10588 kubeadm.go:317] 
	I1025 01:48:43.405671   10588 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I1025 01:48:43.405671   10588 kubeadm.go:317] 
	I1025 01:48:43.405671   10588 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token kcm1je.niaeugxnay31jj1b \
	I1025 01:48:43.405671   10588 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:cfe7dd7a8e61587818260abb61477c9598aed0e51cc4d8006ee76bf98159c639 
	I1025 01:48:43.405671   10588 cni.go:95] Creating CNI manager for "calico"
	I1025 01:48:43.412678   10588 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I1025 01:48:43.416682   10588 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1025 01:48:43.416682   10588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202045 bytes)
	I1025 01:48:43.700866   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 01:48:48.396393   10588 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (4.6954938s)
	I1025 01:48:48.396393   10588 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 01:48:48.422885   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.27.1 minikube.k8s.io/commit=e51468b57074bb26eb09785222979dd1e5fe9cd4 minikube.k8s.io/name=calico-012958 minikube.k8s.io/updated_at=2022_10_25T01_48_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:48.425903   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:48.439878   10588 ops.go:34] apiserver oom_adj: -16
	I1025 01:48:48.905924   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:49.659580   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:50.156985   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:50.660199   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:51.160664   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:51.662596   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:52.167824   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:52.660408   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:53.170199   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:53.658914   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:54.163017   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:54.665611   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:55.156608   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:56.183023   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:57.088317   10588 kubeadm.go:1067] duration metric: took 8.6918636s to wait for elevateKubeSystemPrivileges.
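The burst of `kubectl get sa default` runs between 01:48:48 and 01:48:57 above is a poll-until-success loop: the command fails until the controller manager has created the default service account, then the elevateKubeSystemPrivileges step completes. A minimal, hypothetical Go sketch of such a loop (the ~500ms interval matches the timestamps above; the check function is a stand-in for the real command):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor retries check every interval until it succeeds or timeout elapses.
func waitFor(timeout, interval time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s: %w", timeout, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	attempts := 0
	err := waitFor(10*time.Second, 500*time.Millisecond, func() error {
		attempts++
		if attempts < 5 { // pretend the service account appears on the 5th try
			return errors.New(`serviceaccount "default" not found`)
		}
		return nil
	})
	fmt.Println("attempts:", attempts, "err:", err)
}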
	I1025 01:48:57.088317   10588 kubeadm.go:398] StartCluster complete in 41.1332869s
	I1025 01:48:57.088317   10588 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:57.088317   10588 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 01:48:57.092317   10588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:57.924190   10588 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-012958" rescaled to 1
	I1025 01:48:57.924190   10588 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 01:48:57.927180   10588 out.go:177] * Verifying Kubernetes components...
	I1025 01:48:57.924190   10588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 01:48:57.924190   10588 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I1025 01:48:57.925186   10588 config.go:180] Loaded profile config "calico-012958": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:48:57.931211   10588 addons.go:65] Setting storage-provisioner=true in profile "calico-012958"
	I1025 01:48:57.931211   10588 addons.go:65] Setting default-storageclass=true in profile "calico-012958"
	I1025 01:48:57.931211   10588 addons.go:153] Setting addon storage-provisioner=true in "calico-012958"
	W1025 01:48:57.931211   10588 addons.go:162] addon storage-provisioner should already be in state true
	I1025 01:48:57.931211   10588 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-012958"
	I1025 01:48:57.931211   10588 host.go:66] Checking if "calico-012958" exists ...
	I1025 01:48:57.950182   10588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 01:48:57.960186   10588 cli_runner.go:164] Run: docker container inspect calico-012958 --format={{.State.Status}}
	I1025 01:48:57.962184   10588 cli_runner.go:164] Run: docker container inspect calico-012958 --format={{.State.Status}}
	I1025 01:48:58.061193   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:58.323263   10588 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 01:48:58.326264   10588 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 01:48:58.326264   10588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 01:48:58.343302   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:58.385307   10588 addons.go:153] Setting addon default-storageclass=true in "calico-012958"
	W1025 01:48:58.385307   10588 addons.go:162] addon default-storageclass should already be in state true
	I1025 01:48:58.385307   10588 host.go:66] Checking if "calico-012958" exists ...
	I1025 01:48:58.419267   10588 node_ready.go:35] waiting up to 5m0s for node "calico-012958" to be "Ready" ...
	I1025 01:48:58.419267   10588 cli_runner.go:164] Run: docker container inspect calico-012958 --format={{.State.Status}}
	I1025 01:48:58.583809   10588 node_ready.go:49] node "calico-012958" has status "Ready":"True"
	I1025 01:48:58.583809   10588 node_ready.go:38] duration metric: took 164.5403ms waiting for node "calico-012958" to be "Ready" ...
	I1025 01:48:58.583809   10588 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 01:48:58.618783   10588 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:58.627802   10588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 01:48:58.664788   10588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50354 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa Username:docker}
	I1025 01:48:58.711805   10588 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 01:48:58.711805   10588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 01:48:58.726830   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:59.012995   10588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50354 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa Username:docker}
	I1025 01:48:59.510415   10588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 01:48:59.726048   10588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 01:49:00.792448   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:03.298250   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:05.300129   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:06.198451   10588 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.5705957s)
	I1025 01:49:06.198451   10588 start.go:826] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I1025 01:49:06.688443   10588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.1779777s)
	I1025 01:49:06.689457   10588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.9633595s)
	I1025 01:49:06.691432   10588 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1025 01:49:06.697432   10588 addons.go:414] enableAddons completed in 8.7731798s
	I1025 01:49:07.718550   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:09.790514   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:16.611472   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:18.720116   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:20.783596   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:22.982041   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:25.275383   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:27.290705   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:29.294622   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:32.631013   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:34.875597   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:37.282040   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:39.707081   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:41.725218   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:43.777489   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:45.794882   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:48.588303   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:50.787000   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:56.638537   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:58.791090   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:01.296845   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:03.795390   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:06.280583   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:08.392641   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:10.734371   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:12.781448   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:14.781690   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:16.792499   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:19.704566   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:21.729953   10588 pod_ready.go:92] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"True"
	I1025 01:50:21.729953   10588 pod_ready.go:81] duration metric: took 1m23.1105886s waiting for pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace to be "Ready" ...
	I1025 01:50:21.729953   10588 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-b76rz" in "kube-system" namespace to be "Ready" ...
	I1025 01:50:23.795713   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:26.978546   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:29.292695   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:31.789240   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:33.793721   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:36.299749   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:38.789721   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:40.796308   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:43.285753   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:45.301559   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:47.786423   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:49.787156   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:51.790883   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:54.281428   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:56.294408   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:50:58.299611   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:00.787291   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:02.801444   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:05.282999   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:07.301440   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:09.796580   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:12.298419   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:14.780577   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:16.787941   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:18.796880   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:21.292305   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:23.784018   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:25.789066   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:27.801892   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:30.286383   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:32.294098   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:34.782800   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:36.799611   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:39.291843   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:41.293391   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:43.794714   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:46.295760   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:48.299077   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:50.781167   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:52.799521   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:55.300072   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:51:57.797418   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:00.296499   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:02.781083   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:04.796174   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:07.296334   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:09.325725   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:11.794620   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:14.285139   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:16.781688   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:18.785985   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:21.282479   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:23.293544   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:25.793350   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:27.797473   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:30.289541   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:32.295196   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:34.384863   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:36.795215   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:39.288090   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:41.800931   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:44.288724   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:46.297663   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:48.793607   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:51.280938   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:53.296969   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:55.789504   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:52:57.795766   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:00.289251   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:02.303290   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:04.781087   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:06.789508   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:09.295817   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:11.785019   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:13.800519   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:16.283039   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:18.299446   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:20.791657   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:23.278186   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:25.297274   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:27.798942   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:30.290915   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:32.299950   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:34.301065   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:36.799916   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:39.306383   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:41.791758   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:43.795653   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:46.297440   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:48.297494   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:50.791124   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:52.798182   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:55.288278   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:57.294282   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:53:59.304172   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:01.794970   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:04.302579   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:06.778070   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:08.794868   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:10.803856   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:13.287621   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:15.293093   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:17.295284   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:19.301564   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:21.789357   10588 pod_ready.go:102] pod "calico-node-b76rz" in "kube-system" namespace has status "Ready":"False"
	I1025 01:54:21.806535   10588 pod_ready.go:81] duration metric: took 4m0.0748797s waiting for pod "calico-node-b76rz" in "kube-system" namespace to be "Ready" ...
	E1025 01:54:21.806535   10588 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1025 01:54:21.806535   10588 pod_ready.go:38] duration metric: took 5m23.2204425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 01:54:21.809717   10588 out.go:177] 
	W1025 01:54:21.812123   10588 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W1025 01:54:21.812123   10588 out.go:239] * 
	W1025 01:54:21.814015   10588 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 01:54:21.817113   10588 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (415.96s)
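Note: the start failed because calico-node-b76rz never reported Ready. The log shows calico-kube-controllers-7df895d496-jg9k7 becoming Ready after 1m23s, while calico-node-b76rz was still NotReady when the 4m per-pod wait (pod_ready.go:81) and the overall 5m system-pod wait expired, producing the GUEST_START exit (status 80). A minimal repro sketch against the same profile, assuming the calico-012958 kubeconfig context still exists and that the pods carry the standard Calico label k8s-app=calico-node:

	# Inspect the pod that never became Ready and wait on the same condition minikube polls:
	kubectl --context calico-012958 -n kube-system get pods -l k8s-app=calico-node -o wide
	kubectl --context calico-012958 -n kube-system wait pod -l k8s-app=calico-node --for=condition=Ready --timeout=300s
	# If the wait times out, the container logs usually show why calico-node stays unready:
	kubectl --context calico-012958 -n kube-system logs -l k8s-app=calico-node --all-containers --tail=50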

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (43.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-014519 --alsologtostderr -v=1
E1025 01:49:04.772546    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p newest-cni-014519 --alsologtostderr -v=1: exit status 80 (5.3329629s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-014519 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 01:49:00.614198    4104 out.go:296] Setting OutFile to fd 1504 ...
	I1025 01:49:00.705905    4104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:49:00.705905    4104 out.go:309] Setting ErrFile to fd 1756...
	I1025 01:49:00.705905    4104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:49:00.727442    4104 out.go:303] Setting JSON to false
	I1025 01:49:00.727442    4104 mustload.go:65] Loading cluster: newest-cni-014519
	I1025 01:49:00.729406    4104 config.go:180] Loaded profile config "newest-cni-014519": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:49:00.757410    4104 cli_runner.go:164] Run: docker container inspect newest-cni-014519 --format={{.State.Status}}
	I1025 01:49:01.059081    4104 host.go:66] Checking if "newest-cni-014519" exists ...
	I1025 01:49:01.067059    4104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:49:01.360591    4104 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks
:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.27.0-1666206003-15159/minikube-v1.27.0-1666206003-15159-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.27.0-1666206003-15159-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) me
mory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:C:\Users\jenkins.minikube8:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-014519 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 socket-vmnet-client-path:/opt/socket_vmnet/bin/socket_vmnet_client socket-vmnet-path:/var/run/socket_vmnet ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 01:49:01.364577    4104 out.go:177] * Pausing node newest-cni-014519 ... 
	I1025 01:49:01.366577    4104 host.go:66] Checking if "newest-cni-014519" exists ...
	I1025 01:49:01.388546    4104 ssh_runner.go:195] Run: systemctl --version
	I1025 01:49:01.403330    4104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:49:01.678328    4104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50399 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\newest-cni-014519\id_rsa Username:docker}
	I1025 01:49:01.939440    4104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 01:49:02.213539    4104 pause.go:51] kubelet running: true
	I1025 01:49:02.233518    4104 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 01:49:03.157218    4104 ssh_runner.go:195] Run: docker ps --filter status=running --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I1025 01:49:03.351279    4104 docker.go:460] Pausing containers: [3ee97de7a1ce d7d47d6175b2 a2108138e300 47738d6c8227 375e90585029 c22cc3b5e7fc 6fdfd9f56268 d4b7a7ce03a2 40651d26ca2a d728c043f9f7 d5d01d92e7ed 4f0fa2a45d19 eaae751434a8 ad84ef6daead]
	I1025 01:49:03.360250    4104 ssh_runner.go:195] Run: docker pause 3ee97de7a1ce d7d47d6175b2 a2108138e300 47738d6c8227 375e90585029 c22cc3b5e7fc 6fdfd9f56268 d4b7a7ce03a2 40651d26ca2a d728c043f9f7 d5d01d92e7ed 4f0fa2a45d19 eaae751434a8 ad84ef6daead
	I1025 01:49:04.096967    4104 out.go:177] 
	W1025 01:49:04.099974    4104 out.go:239] X Exiting due to GUEST_PAUSE: pausing containers: docker: docker pause 3ee97de7a1ce d7d47d6175b2 a2108138e300 47738d6c8227 375e90585029 c22cc3b5e7fc 6fdfd9f56268 d4b7a7ce03a2 40651d26ca2a d728c043f9f7 d5d01d92e7ed 4f0fa2a45d19 eaae751434a8 ad84ef6daead: Process exited with status 1
	stdout:
	a2108138e300
	47738d6c8227
	375e90585029
	c22cc3b5e7fc
	6fdfd9f56268
	d4b7a7ce03a2
	40651d26ca2a
	d728c043f9f7
	d5d01d92e7ed
	4f0fa2a45d19
	eaae751434a8
	ad84ef6daead
	
	stderr:
	Error response from daemon: Container 3ee97de7a1ce626673af734ff712924242e2d515fabd82803f3ca71b42ee152a is not running
	Error response from daemon: Container d7d47d6175b20dd0398269055aba4f03a47477527f8d5df6b5885dd1e11f02e5 is not running
	
	W1025 01:49:04.100965    4104 out.go:239] * 
	W1025 01:49:05.544122    4104 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube_pause_af5e6777317b02357cc1bb6c73885f084c0a6c97_49.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 01:49:05.549164    4104 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-windows-amd64.exe pause -p newest-cni-014519 --alsologtostderr -v=1 failed: exit status 80
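Note: this is a race between listing and pausing. At 01:49:03.157 `docker ps --filter status=running` returned 14 container IDs, but by 01:49:03.360 two of them (3ee97de7a1ce and d7d47d6175b2) had already exited, and `docker pause` fails the whole batch with status 1 even though the remaining 12 containers were paused (they are listed in stdout). A hedged sketch that pauses per container and tolerates ones that stop in the meantime (same name filter as in the log; this is not minikube's actual implementation):

	# Pause each matching container individually; skip any that exited since the listing.
	for id in $(docker ps -q --filter status=running \
	    --filter 'name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_'); do
	  docker pause "$id" || echo "skipping $id: no longer running"
	done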
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-014519
helpers_test.go:235: (dbg) docker inspect newest-cni-014519:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "56bd31edd99a33eef2d96a448aaca4408f21e51f10e086cc12177f030b7c3fb6",
	        "Created": "2022-10-25T01:46:01.6707992Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 320788,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-10-25T01:48:10.7022639Z",
	            "FinishedAt": "2022-10-25T01:48:04.5419745Z"
	        },
	        "Image": "sha256:bee7563418bf494c9ba81d904a81ea2c80a1e144325734b9d4b288db23240ab5",
	        "ResolvConfPath": "/var/lib/docker/containers/56bd31edd99a33eef2d96a448aaca4408f21e51f10e086cc12177f030b7c3fb6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/56bd31edd99a33eef2d96a448aaca4408f21e51f10e086cc12177f030b7c3fb6/hostname",
	        "HostsPath": "/var/lib/docker/containers/56bd31edd99a33eef2d96a448aaca4408f21e51f10e086cc12177f030b7c3fb6/hosts",
	        "LogPath": "/var/lib/docker/containers/56bd31edd99a33eef2d96a448aaca4408f21e51f10e086cc12177f030b7c3fb6/56bd31edd99a33eef2d96a448aaca4408f21e51f10e086cc12177f030b7c3fb6-json.log",
	        "Name": "/newest-cni-014519",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-014519:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/458f5fd9996cbd7add54976159c06dcfb8677fe1dda1c55ff453f3e36f85c3d7-init/diff:/var/lib/docker/overlay2/1d72d69c076943d6cd413bc50b6a474779145c6396136b4aef1829c16f4a6d69/diff:/var/lib/docker/overlay2/2712457ef6b3ec08714d64e5261a9b327c3f8db2156d7a1b493340af804c46f1/diff:/var/lib/docker/overlay2/956ad2e584ed04429b79ab0ee4bdc8977af3fcfbab3cc0ed570922cc07ffd0a6/diff:/var/lib/docker/overlay2/c4f80c5076f71429b4266dc613d1850e7295faded99f05e04fcb13d2cb4d3157/diff:/var/lib/docker/overlay2/18b12a09b44604345877d4490348801b993263f747090a3a48eac835ac323d86/diff:/var/lib/docker/overlay2/6ce1e052ac8d5221cb1978a93a4c4d18c74da80e998b6e54246cdc95997a769f/diff:/var/lib/docker/overlay2/9e6e7c177b550c9c4fc4af8222ccc9bfe5b01fa177f08388c541fde750e4df80/diff:/var/lib/docker/overlay2/c56ad1fbd8fd09ba635cb91b82c303fab8be925f82edac48c47ed2b99f054b36/diff:/var/lib/docker/overlay2/b4a229acad56b83bd9d04813f3f4cf0c8c562169b12ef1e88243f4588d0b28f9/diff:/var/lib/docker/overlay2/56f30b
af9b74a7e6afda16e0f90a1863a3db06b5fec5cf06828152edc0faa420/diff:/var/lib/docker/overlay2/4275e6a6be34231198b756601a3b51a1d8446e8830b1c4037b20370047b88b9e/diff:/var/lib/docker/overlay2/0a9f47913b546daa2d558a978beaaa9e1e7e73a568fa1ee9d198e1e2154d3f75/diff:/var/lib/docker/overlay2/f1895cfb690eaa9bf966dd3f040878344a80c0dc3606dd2d5e67d9495cfa3ff8/diff:/var/lib/docker/overlay2/84335bbaf957cb1942f1d774b817e78297dbe5ffeb7e2e406e7492cf5a720c7e/diff:/var/lib/docker/overlay2/d9a26e65c06347ae6f8f306617639febfee5427dffa6d33a6acb3abfc22092fb/diff:/var/lib/docker/overlay2/a6893072e83e913a455da1f55020a69e4cd75c9ca7b9893e47d184eaf0da806d/diff:/var/lib/docker/overlay2/2d4c8dbcc1a6e63159280d831a4e448df4587dae065b53837a0e735e579361c4/diff:/var/lib/docker/overlay2/6fd2d854ad2aede74411487bcfe2f1fa3c4e1bbfad739455a690a5801c7c9d18/diff:/var/lib/docker/overlay2/d8435d49436e1e6d94054688732a28cdf047031ca600d938ab879a3f72791749/diff:/var/lib/docker/overlay2/618bd9835cc6596945db86c2cd23a6ea6c60992ff42cb8ba7a13f96776d79bb3/diff:/var/lib/d
ocker/overlay2/8e9af4c331a1374dad5f203889fa4953cd3111c705011d2f885ce8a3a04daf2c/diff:/var/lib/docker/overlay2/b8b4d702f888aa572be928e4e449cfaed5da2a045d94f145c0d48b2f838a2dc5/diff:/var/lib/docker/overlay2/6b708706c388c674df30fea4b16deb3b96447089d2a1cd5341ef199bd5dc3c4e/diff:/var/lib/docker/overlay2/f3bab3644fefb2215fd7b4b857958be30f575fd080ec37030b8b970e46155cdc/diff:/var/lib/docker/overlay2/809d38d9cc75c39f4eab1c2c64257e010b66f6dd17717a251371701f51b07237/diff:/var/lib/docker/overlay2/b2fc12e35954dea9baf6e418bbc1b629a71863e855e4373e8d665590cd7cbc54/diff:/var/lib/docker/overlay2/34dcaea23605015741cd4c620ce445c935ca6a08892a5aa15165a8422bb013c0/diff:/var/lib/docker/overlay2/4c362976bdb9f18c68d5c294dc08d7939899992ed5f8bb13ab34f58ec03fcdd6/diff:/var/lib/docker/overlay2/316879c125d7c6ab5ddb970715d730f6a9ea41f2b58da1ac9379b1d528a25970/diff:/var/lib/docker/overlay2/241a6ea1a0e862f8ac9d51e14f03999907acd9030349143120fad52b3c1c2b97/diff:/var/lib/docker/overlay2/c64f861002875793ea9a7d58a0e0b96ad95c3c7fb2874b758d4fb1bc26c
34587/diff:/var/lib/docker/overlay2/9b91106560e299e000b1229f3c2774c8ff0b881dbb4a27b80b89d0287f2f581d/diff:/var/lib/docker/overlay2/48a0a6d3a2a4100e68d167121a7df5a2244821b71406e29d5cc8220307ed9847/diff:/var/lib/docker/overlay2/1f280e54c1637034501f87fed8ca123799984880082b190271d5fa183974cb70/diff:/var/lib/docker/overlay2/8b8d91bd6daf07b06612bec716b08ed3d8032a4caa291548eead78a2b2c7e037/diff:/var/lib/docker/overlay2/b3ab8284e9708da3d4a94f3bd549609f23fcc286b4c1522cdb244344a4957bba/diff:/var/lib/docker/overlay2/7cc92644ec11a70cec25faf398c533eaa555c3a0ab3e783bf6f0cb342f18de20/diff:/var/lib/docker/overlay2/7f44e48c3f9293e16b6fedacc411012e83674000293a110908fcbe7b8aa0f56c/diff:/var/lib/docker/overlay2/7ded7fd7dc10119d3c74efa565ab8580571328086d82d5e795e7adcd3276e653/diff:/var/lib/docker/overlay2/b4654f15c85f235a8a9d5b03067d9aacd8d02569b48170551e8cc1fb340698ad/diff:/var/lib/docker/overlay2/901a06d4c922f4dcb994eec1c950879f560844312e104093523c1f1637594c70/diff:/var/lib/docker/overlay2/0fdbbeb11fdbed96bd80868c62d4c13bf887e7
83043225667d2bde711d03b757/diff",
	                "MergedDir": "/var/lib/docker/overlay2/458f5fd9996cbd7add54976159c06dcfb8677fe1dda1c55ff453f3e36f85c3d7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/458f5fd9996cbd7add54976159c06dcfb8677fe1dda1c55ff453f3e36f85c3d7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/458f5fd9996cbd7add54976159c06dcfb8677fe1dda1c55ff453f3e36f85c3d7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-014519",
	                "Source": "/var/lib/docker/volumes/newest-cni-014519/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-014519",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-014519",
	                "name.minikube.sigs.k8s.io": "newest-cni-014519",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9ac6ba89d1f1480230cb193557db85aec0735e65ddd6ee8e54cc0af6bc3fc6a6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50399"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50395"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50396"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50397"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50398"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9ac6ba89d1f1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "59dd7bd9956d3c371671e9429da5e61a79cca582c848dd2a23d7fca2654cac72",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "3d13f6afc0480320c24c724d761e552bf16a8baec115a212b99351bb4c3bc4ea",
	                    "EndpointID": "59dd7bd9956d3c371671e9429da5e61a79cca582c848dd2a23d7fca2654cac72",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
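Note: the inspect output shows the container is still running, with SSH published on 127.0.0.1:50399 and the API server port 8443/tcp on 127.0.0.1:50398, matching the values the harness extracted earlier via cli_runner. Equivalent spot checks, using the same Go-template expressions shown in the log:

	docker container inspect newest-cni-014519 --format '{{.State.Status}}'
	docker container inspect newest-cni-014519 -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'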
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-014519 -n newest-cni-014519
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-014519 -n newest-cni-014519: exit status 2 (2.1447477s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
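Note: `status --format={{.Host}}` only reports the host container, which is Running; the non-zero exit most likely reflects other components not being in their expected state after the failed pause (kubelet was disabled at 01:49:02 and 12 of the 14 k8s containers were paused). A quick way to see the full breakdown, assuming the same workspace and profile:

	out/minikube-windows-amd64.exe status -p newest-cni-014519
	out/minikube-windows-amd64.exe status -p newest-cni-014519 --output json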
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-014519 logs -n 25
E1025 01:49:10.619364    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
E1025 01:49:11.594134    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 01:49:15.698372    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
E1025 01:49:15.713330    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
E1025 01:49:15.728834    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
E1025 01:49:15.759240    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
E1025 01:49:15.806884    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
E1025 01:49:15.901644    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
E1025 01:49:16.074754    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
E1025 01:49:16.401483    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
E1025 01:49:17.045551    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
E1025 01:49:18.339812    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
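Note: these cert_rotation errors come from the test binary (pid 4200), not from the pause command under test. They appear to be a background client certificate reload still pointing at kubeconfig entries for profiles deleted earlier in the run (for example old-k8s-version-013521 and no-preload-013544; see the delete entries in the Audit table below), so they are noise relative to this failure. A sketch to list which kubeconfig users still reference certificates of deleted profiles, assuming the default kubeconfig of this test run:

	kubectl config view -o jsonpath='{range .users[*]}{.name}{"\t"}{.user.client-certificate}{"\n"}{end}'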

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-014519 logs -n 25: (14.6370657s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| pause   | -p no-preload-013544                                       | no-preload-013544            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:45 GMT | 25 Oct 22 01:45 GMT |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| unpause | -p no-preload-013544                                       | no-preload-013544            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:45 GMT | 25 Oct 22 01:46 GMT |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| delete  | -p no-preload-013544                                       | no-preload-013544            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	| ssh     | -p old-k8s-version-013521 sudo                             | old-k8s-version-013521       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	|         | crictl images -o json                                      |                              |                   |         |                     |                     |
	| delete  | -p no-preload-013544                                       | no-preload-013544            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	| pause   | -p old-k8s-version-013521                                  | old-k8s-version-013521       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| start   | -p auto-012955 --memory=2048                               | auto-012955                  | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:48 GMT |
	|         | --alsologtostderr                                          |                              |                   |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                              |                   |         |                     |                     |
	|         | --driver=docker                                            |                              |                   |         |                     |                     |
	| unpause | -p old-k8s-version-013521                                  | old-k8s-version-013521       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-diff-port-013732 | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	|         | default-k8s-diff-port-013732                               |                              |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |                              |                   |         |                     |                     |
	| delete  | -p old-k8s-version-013521                                  | old-k8s-version-013521       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	| pause   | -p                                                         | default-k8s-diff-port-013732 | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	|         | default-k8s-diff-port-013732                               |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| unpause | -p                                                         | default-k8s-diff-port-013732 | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	|         | default-k8s-diff-port-013732                               |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| delete  | -p old-k8s-version-013521                                  | old-k8s-version-013521       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:47 GMT |
	| start   | -p cilium-012958 --memory=2048                             | cilium-012958                | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:47 GMT |                     |
	|         | --alsologtostderr --wait=true                              |                              |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=cilium                             |                              |                   |         |                     |                     |
	|         | --driver=docker                                            |                              |                   |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-013732 | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:47 GMT | 25 Oct 22 01:47 GMT |
	|         | default-k8s-diff-port-013732                               |                              |                   |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-013732 | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:47 GMT | 25 Oct 22 01:47 GMT |
	|         | default-k8s-diff-port-013732                               |                              |                   |         |                     |                     |
	| start   | -p calico-012958 --memory=2048                             | calico-012958                | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:47 GMT |                     |
	|         | --alsologtostderr --wait=true                              |                              |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=calico                             |                              |                   |         |                     |                     |
	|         | --driver=docker                                            |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-014519                 | newest-cni-014519            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:47 GMT | 25 Oct 22 01:48 GMT |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |                   |         |                     |                     |
	| stop    | -p newest-cni-014519                                       | newest-cni-014519            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:48 GMT | 25 Oct 22 01:48 GMT |
	|         | --alsologtostderr -v=3                                     |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-014519                      | newest-cni-014519            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:48 GMT | 25 Oct 22 01:48 GMT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |                   |         |                     |                     |
	| start   | -p newest-cni-014519 --memory=2200 --alsologtostderr       | newest-cni-014519            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:48 GMT | 25 Oct 22 01:48 GMT |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |                   |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.25.3               |                              |                   |         |                     |                     |
	| ssh     | -p auto-012955 pgrep -a                                    | auto-012955                  | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:48 GMT | 25 Oct 22 01:48 GMT |
	|         | kubelet                                                    |                              |                   |         |                     |                     |
	| ssh     | -p newest-cni-014519 sudo                                  | newest-cni-014519            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:48 GMT | 25 Oct 22 01:49 GMT |
	|         | crictl images -o json                                      |                              |                   |         |                     |                     |
	| pause   | -p newest-cni-014519                                       | newest-cni-014519            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:49 GMT |                     |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| delete  | -p auto-012955                                             | auto-012955                  | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:49 GMT |                     |
	|---------|------------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/25 01:48:07
	Running on machine: minikube8
	Binary: Built with gc go1.19.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
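The header line above documents the klog-style prefix every entry in this dump carries: severity letter, mmdd date, timestamp, thread id, source file:line, then the message. As a rough illustration only (the field labels are mine, not klog's), a small Go parser for that prefix could look like this:

    // Throwaway parser for the prefix format described above; not part of the test run.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    var klogLine = regexp.MustCompile(
    	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ ]+:\d+)\] (.*)$`)

    func main() {
    	line := "I1025 01:48:07.434497    8088 out.go:296] Setting OutFile to fd 1956 ..."
    	m := klogLine.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("line does not match the documented format")
    		return
    	}
    	fmt.Printf("severity=%s date=%s time=%s thread=%s source=%s msg=%q\n",
    		m[1], m[2], m[3], m[4], m[5], m[6])
    }
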
	I1025 01:48:07.434497    8088 out.go:296] Setting OutFile to fd 1956 ...
	I1025 01:48:07.501099    8088 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:48:07.501099    8088 out.go:309] Setting ErrFile to fd 2000...
	I1025 01:48:07.501099    8088 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:48:07.522597    8088 out.go:303] Setting JSON to false
	I1025 01:48:07.525592    8088 start.go:116] hostinfo: {"hostname":"minikube8","uptime":12131,"bootTime":1666650356,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W1025 01:48:07.525592    8088 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 01:48:07.528708    8088 out.go:177] * [newest-cni-014519] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1025 01:48:07.532614    8088 notify.go:220] Checking for updates...
	I1025 01:48:07.534605    8088 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 01:48:07.538614    8088 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I1025 01:48:07.540598    8088 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 01:48:07.548601    8088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 01:48:07.551578    8088 config.go:180] Loaded profile config "newest-cni-014519": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:48:07.552583    8088 driver.go:362] Setting default libvirt URI to qemu:///system
	I1025 01:48:07.842860    8088 docker.go:137] docker version: linux-20.10.17
	I1025 01:48:07.849852    8088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:48:08.427560    8088 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:87 OomKillDisable:true NGoroutines:63 SystemTime:2022-10-25 01:48:08.034946 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-p
lugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:48:08.431552    8088 out.go:177] * Using the docker driver based on existing profile
	I1025 01:48:08.432563    8088 start.go:282] selected driver: docker
	I1025 01:48:08.432563    8088 start.go:808] validating driver "docker" against &{Name:newest-cni-014519 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-014519 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Networ
k: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 01:48:08.433738    8088 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 01:48:08.502581    8088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:48:09.107205    8088 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:87 OomKillDisable:true NGoroutines:63 SystemTime:2022-10-25 01:48:08.6702977 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:48:09.108214    8088 start_flags.go:904] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 01:48:09.108214    8088 cni.go:95] Creating CNI manager for ""
	I1025 01:48:09.108214    8088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 01:48:09.108214    8088 start_flags.go:317] config:
	{Name:newest-cni-014519 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-014519 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount
:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 01:48:09.114151    8088 out.go:177] * Starting control plane node newest-cni-014519 in cluster newest-cni-014519
	I1025 01:48:09.116765    8088 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 01:48:09.121035    8088 out.go:177] * Pulling base image ...
	I1025 01:48:09.123409    8088 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 01:48:09.124216    8088 image.go:82] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 01:48:09.124216    8088 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 01:48:09.124216    8088 cache.go:57] Caching tarball of preloaded images
	I1025 01:48:09.124216    8088 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 01:48:09.124216    8088 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 01:48:09.125224    8088 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\newest-cni-014519\config.json ...
	I1025 01:48:09.421155    8088 image.go:86] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 01:48:09.421155    8088 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 01:48:09.421155    8088 cache.go:208] Successfully downloaded all kic artifacts
	I1025 01:48:09.421155    8088 start.go:364] acquiring machines lock for newest-cni-014519: {Name:mkcfcd28ce82156fffc70275d9ea18a1fe5a9203 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 01:48:09.421155    8088 start.go:368] acquired machines lock for "newest-cni-014519" in 0s
	I1025 01:48:09.421155    8088 start.go:96] Skipping create...Using existing machine configuration
	I1025 01:48:09.421155    8088 fix.go:55] fixHost starting: 
	I1025 01:48:09.452397    8088 cli_runner.go:164] Run: docker container inspect newest-cni-014519 --format={{.State.Status}}
	I1025 01:48:09.722143    8088 fix.go:103] recreateIfNeeded on newest-cni-014519: state=Stopped err=<nil>
	W1025 01:48:09.722143    8088 fix.go:129] unexpected machine state, will restart: <nil>
	I1025 01:48:09.728140    8088 out.go:177] * Restarting existing docker container for "newest-cni-014519" ...
	I1025 01:48:06.483404   10588 main.go:134] libmachine: Using SSH client type: native
	I1025 01:48:06.484094   10588 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50354 <nil> <nil>}
	I1025 01:48:06.484094   10588 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 01:48:07.988187   10588 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-09-08 23:09:37.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-10-25 01:48:06.214875000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1025 01:48:07.988187   10588 machine.go:91] provisioned docker machine in 4.5877919s
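The SSH command shown above the diff swaps in the staged docker.service.new only when it differs from the installed unit, then reloads systemd and re-enables and restarts Docker, which is what keeps repeated provisioning runs from restarting the daemon needlessly. A rough sketch of how such a command string could be assembled follows; the helper name replaceUnitCmd is illustrative, not a minikube identifier.

    // Rough reconstruction of the idempotent unit-swap command seen in the log above.
    package main

    import "fmt"

    func replaceUnitCmd(svc string) string {
    	unit := "/lib/systemd/system/" + svc + ".service"
    	return fmt.Sprintf(
    		"sudo diff -u %[1]s %[1]s.new || "+
    			"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
    			"sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
    		unit, svc)
    }

    func main() {
    	// Prints the same shape of command the log shows for the docker service.
    	fmt.Println(replaceUnitCmd("docker"))
    }
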
	I1025 01:48:07.988187   10588 client.go:171] LocalClient.Create took 36.9508501s
	I1025 01:48:07.988187   10588 start.go:167] duration metric: libmachine.API.Create for "calico-012958" took 36.9508501s
	I1025 01:48:07.988187   10588 start.go:300] post-start starting for "calico-012958" (driver="docker")
	I1025 01:48:07.988187   10588 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 01:48:08.010282   10588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 01:48:08.020998   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:08.222081   10588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50354 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa Username:docker}
	I1025 01:48:08.360102   10588 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 01:48:08.371546   10588 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 01:48:08.372590   10588 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 01:48:08.372590   10588 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 01:48:08.372590   10588 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1025 01:48:08.372590   10588 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I1025 01:48:08.372590   10588 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I1025 01:48:08.373586   10588 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem -> 42002.pem in /etc/ssl/certs
	I1025 01:48:08.390574   10588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 01:48:08.416569   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem --> /etc/ssl/certs/42002.pem (1708 bytes)
	I1025 01:48:08.464548   10588 start.go:303] post-start completed in 476.3582ms
	I1025 01:48:08.474584   10588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-012958
	I1025 01:48:08.714587   10588 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\config.json ...
	I1025 01:48:08.731551   10588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 01:48:08.735547   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:08.963550   10588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50354 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa Username:docker}
	I1025 01:48:09.118216   10588 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 01:48:09.132228   10588 start.go:128] duration metric: createHost completed in 38.1075755s
	I1025 01:48:09.132228   10588 start.go:83] releasing machines lock for "calico-012958", held for 38.1080283s
	I1025 01:48:09.145217   10588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-012958
	I1025 01:48:09.425142   10588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 01:48:09.447276   10588 ssh_runner.go:195] Run: systemctl --version
	I1025 01:48:09.448142   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:09.459590   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:09.694138   10588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50354 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa Username:docker}
	I1025 01:48:09.706168   10588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50354 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa Username:docker}
	I1025 01:48:09.834108   10588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 01:48:09.916106   10588 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I1025 01:48:09.964103   10588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:48:10.128598   10588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 01:48:10.343502   10588 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 01:48:10.371644   10588 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1025 01:48:10.382640   10588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 01:48:10.416596   10588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 01:48:10.471935   10588 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 01:48:10.724153   10588 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 01:48:10.949266   10588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:48:11.180698   10588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 01:48:12.076461   10328 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1025 01:48:12.077103   10328 kubeadm.go:317] [preflight] Running pre-flight checks
	I1025 01:48:12.077103   10328 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 01:48:12.077103   10328 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 01:48:12.077103   10328 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 01:48:12.078103   10328 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 01:48:12.082100   10328 out.go:204]   - Generating certificates and keys ...
	I1025 01:48:12.082100   10328 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1025 01:48:12.082100   10328 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1025 01:48:12.082100   10328 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 01:48:12.083097   10328 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1025 01:48:12.083097   10328 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1025 01:48:12.083097   10328 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1025 01:48:12.083097   10328 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1025 01:48:12.083097   10328 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [auto-012955 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 01:48:12.084100   10328 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1025 01:48:12.084100   10328 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [auto-012955 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 01:48:12.084100   10328 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 01:48:12.085121   10328 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 01:48:12.085121   10328 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1025 01:48:12.085121   10328 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 01:48:12.085121   10328 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 01:48:12.085121   10328 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 01:48:12.085121   10328 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 01:48:12.086090   10328 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 01:48:12.086090   10328 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 01:48:12.086090   10328 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 01:48:12.087079   10328 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1025 01:48:12.087079   10328 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 01:48:12.093104   10328 out.go:204]   - Booting up control plane ...
	I1025 01:48:12.094099   10328 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 01:48:12.094099   10328 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 01:48:12.094099   10328 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 01:48:12.094099   10328 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 01:48:12.095100   10328 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 01:48:12.095100   10328 kubeadm.go:317] [apiclient] All control plane components are healthy after 27.520632 seconds
	I1025 01:48:12.095100   10328 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 01:48:12.096098   10328 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 01:48:12.096098   10328 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 01:48:12.096098   10328 kubeadm.go:317] [mark-control-plane] Marking the node auto-012955 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 01:48:12.097100   10328 kubeadm.go:317] [bootstrap-token] Using token: 32hr85.wjjfr44bx6itpztf
	I1025 01:48:09.740131    8088 cli_runner.go:164] Run: docker start newest-cni-014519
	I1025 01:48:10.761153    8088 cli_runner.go:217] Completed: docker start newest-cni-014519: (1.0210155s)
	I1025 01:48:10.769164    8088 cli_runner.go:164] Run: docker container inspect newest-cni-014519 --format={{.State.Status}}
	I1025 01:48:11.013202    8088 kic.go:415] container "newest-cni-014519" state is running.
	I1025 01:48:11.022189    8088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-014519
	I1025 01:48:11.249728    8088 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\newest-cni-014519\config.json ...
	I1025 01:48:11.252713    8088 machine.go:88] provisioning docker machine ...
	I1025 01:48:11.252713    8088 ubuntu.go:169] provisioning hostname "newest-cni-014519"
	I1025 01:48:11.261717    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:11.526496    8088 main.go:134] libmachine: Using SSH client type: native
	I1025 01:48:11.527488    8088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50399 <nil> <nil>}
	I1025 01:48:11.527488    8088 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-014519 && echo "newest-cni-014519" | sudo tee /etc/hostname
	I1025 01:48:11.532502    8088 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 01:48:11.910142   10588 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 01:48:12.114079   10588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:48:12.305100   10588 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1025 01:48:12.331098   10588 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 01:48:12.344290   10588 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 01:48:12.357283   10588 start.go:472] Will wait 60s for crictl version
	I1025 01:48:12.371290   10588 ssh_runner.go:195] Run: sudo crictl version
	I1025 01:48:12.458084   10588 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.18
	RuntimeApiVersion:  1.41.0
	I1025 01:48:12.465481   10588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 01:48:12.564500   10588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 01:48:12.103086   10328 out.go:204]   - Configuring RBAC rules ...
	I1025 01:48:12.103086   10328 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 01:48:12.104089   10328 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 01:48:12.104089   10328 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 01:48:12.104089   10328 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 01:48:12.105101   10328 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 01:48:12.105101   10328 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 01:48:12.105101   10328 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 01:48:12.105101   10328 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I1025 01:48:12.106099   10328 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I1025 01:48:12.106099   10328 kubeadm.go:317] 
	I1025 01:48:12.106099   10328 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I1025 01:48:12.106099   10328 kubeadm.go:317] 
	I1025 01:48:12.106099   10328 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I1025 01:48:12.106099   10328 kubeadm.go:317] 
	I1025 01:48:12.106099   10328 kubeadm.go:317]   mkdir -p $HOME/.kube
	I1025 01:48:12.106099   10328 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 01:48:12.106099   10328 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 01:48:12.106099   10328 kubeadm.go:317] 
	I1025 01:48:12.106099   10328 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I1025 01:48:12.106099   10328 kubeadm.go:317] 
	I1025 01:48:12.107079   10328 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 01:48:12.107079   10328 kubeadm.go:317] 
	I1025 01:48:12.107079   10328 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I1025 01:48:12.107079   10328 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 01:48:12.107079   10328 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 01:48:12.107079   10328 kubeadm.go:317] 
	I1025 01:48:12.107079   10328 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 01:48:12.107079   10328 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I1025 01:48:12.108081   10328 kubeadm.go:317] 
	I1025 01:48:12.108081   10328 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 32hr85.wjjfr44bx6itpztf \
	I1025 01:48:12.108081   10328 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:cfe7dd7a8e61587818260abb61477c9598aed0e51cc4d8006ee76bf98159c639 \
	I1025 01:48:12.108081   10328 kubeadm.go:317] 	--control-plane 
	I1025 01:48:12.108081   10328 kubeadm.go:317] 
	I1025 01:48:12.108081   10328 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I1025 01:48:12.108081   10328 kubeadm.go:317] 
	I1025 01:48:12.109080   10328 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 32hr85.wjjfr44bx6itpztf \
	I1025 01:48:12.109080   10328 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:cfe7dd7a8e61587818260abb61477c9598aed0e51cc4d8006ee76bf98159c639 
	I1025 01:48:12.109080   10328 cni.go:95] Creating CNI manager for ""
	I1025 01:48:12.109080   10328 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 01:48:12.109080   10328 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 01:48:12.121076   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.27.1 minikube.k8s.io/commit=e51468b57074bb26eb09785222979dd1e5fe9cd4 minikube.k8s.io/name=auto-012955 minikube.k8s.io/updated_at=2022_10_25T01_48_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:12.122080   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:12.194106   10328 ops.go:34] apiserver oom_adj: -16
	I1025 01:48:12.812874   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:14.027606   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:12.656558   10588 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.18 ...
	I1025 01:48:12.664569   10588 cli_runner.go:164] Run: docker exec -t calico-012958 dig +short host.docker.internal
	I1025 01:48:13.089876   10588 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1025 01:48:13.105897   10588 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1025 01:48:13.116880   10588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
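The two steps above discover the Windows host's address from inside the container by running dig +short host.docker.internal, then pin it into /etc/hosts as host.minikube.internal. A small sketch of the discovery step, assuming the docker CLI is on PATH and the container name from this run, might look like:

    // Sketch only: recover the host IP the same way the log above does.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "exec", "-t", "calico-012958",
    		"dig", "+short", "host.docker.internal").Output()
    	if err != nil {
    		fmt.Println("dig inside the container failed:", err)
    		return
    	}
    	fmt.Println("host ip for /etc/hosts:", strings.TrimSpace(string(out)))
    }
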
	I1025 01:48:13.150185   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:13.355841   10588 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 01:48:13.363459   10588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 01:48:13.423206   10588 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 01:48:13.423206   10588 docker.go:542] Images already preloaded, skipping extraction
	I1025 01:48:13.431181   10588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 01:48:13.500721   10588 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 01:48:13.500721   10588 cache_images.go:84] Images are preloaded, skipping loading
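The "Images already preloaded, skipping extraction" and "Images are preloaded, skipping loading" decisions above amount to checking that everything the preload tarball would provide is already reported by docker images. A minimal sketch of that kind of check (the helper and sample slices are illustrative, not minikube's):

    // Set-difference check: which required images are not yet in the daemon.
    package main

    import "fmt"

    func missingImages(required, present []string) []string {
    	have := make(map[string]bool, len(present))
    	for _, img := range present {
    		have[img] = true
    	}
    	var out []string
    	for _, img := range required {
    		if !have[img] {
    			out = append(out, img)
    		}
    	}
    	return out
    }

    func main() {
    	required := []string{
    		"registry.k8s.io/kube-apiserver:v1.25.3",
    		"registry.k8s.io/pause:3.8",
    	}
    	present := []string{
    		"registry.k8s.io/kube-apiserver:v1.25.3",
    		"registry.k8s.io/pause:3.8",
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    	}
    	fmt.Println("missing:", missingImages(required, present)) // missing: []
    }
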
	I1025 01:48:13.509759   10588 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 01:48:13.692825   10588 cni.go:95] Creating CNI manager for "calico"
	I1025 01:48:13.692825   10588 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 01:48:13.692825   10588 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-012958 NodeName:calico-012958 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1025 01:48:13.693521   10588 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "calico-012958"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 01:48:13.693724   10588 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=calico-012958 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:calico-012958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
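The generated config above puts pods on 10.244.0.0/16 (podSubnet, reused as clusterCIDR for kube-proxy) and services on 10.96.0.0/12 (serviceSubnet); kubeadm expects these two ranges to be disjoint. A quick hedged check of that property, not minikube code:

    // Confirm the pod and service subnets from the config above do not overlap.
    package main

    import (
    	"fmt"
    	"net"
    )

    func overlap(a, b *net.IPNet) bool {
    	// Two CIDRs overlap exactly when one contains the other's base address.
    	return a.Contains(b.IP) || b.Contains(a.IP)
    }

    func main() {
    	_, pod, _ := net.ParseCIDR("10.244.0.0/16") // podSubnet / clusterCIDR above
    	_, svc, _ := net.ParseCIDR("10.96.0.0/12")  // serviceSubnet above
    	fmt.Println("pod and service CIDRs overlap:", overlap(pod, svc)) // false
    }
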
	I1025 01:48:13.707327   10588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1025 01:48:13.734826   10588 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 01:48:13.745358   10588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 01:48:13.787561   10588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
	I1025 01:48:13.832592   10588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 01:48:13.868456   10588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2036 bytes)
	I1025 01:48:13.923288   10588 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1025 01:48:13.933289   10588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 01:48:13.961312   10588 certs.go:54] Setting up C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958 for IP: 192.168.67.2
	I1025 01:48:13.961903   10588 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key
	I1025 01:48:13.962152   10588 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key
	I1025 01:48:13.962700   10588 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\client.key
	I1025 01:48:13.962778   10588 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\client.crt with IP's: []
	I1025 01:48:14.503607   10588 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\client.crt ...
	I1025 01:48:14.503703   10588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\client.crt: {Name:mke0d8cb06416f502e6bcf65687095510450a54f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:14.505541   10588 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\client.key ...
	I1025 01:48:14.505625   10588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\client.key: {Name:mk568a00cc73687479aeb8bbbf37e26ab417138f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:14.506440   10588 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.key.c7fa3a9e
	I1025 01:48:14.506440   10588 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 01:48:14.723739   10588 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.crt.c7fa3a9e ...
	I1025 01:48:14.723739   10588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.crt.c7fa3a9e: {Name:mk3c787c606463584820032d54fc0a7605009379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:14.724747   10588 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.key.c7fa3a9e ...
	I1025 01:48:14.724747   10588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.key.c7fa3a9e: {Name:mkcb9282ccf2a1f51186f962f8338e1502e42eaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:14.725769   10588 certs.go:320] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.crt
	I1025 01:48:14.732753   10588 certs.go:324] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.key
	I1025 01:48:14.733775   10588 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.key
	I1025 01:48:14.734743   10588 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.crt with IP's: []
	I1025 01:48:14.899928   10588 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.crt ...
	I1025 01:48:14.899928   10588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.crt: {Name:mk1cb3c8e163d99a20ecc7a3178be6ed3576502c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:14.901499   10588 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.key ...
	I1025 01:48:14.901499   10588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.key: {Name:mka6f6bb8292025d453be0bbe5e383e9b983ccad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:14.911566   10588 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\4200.pem (1338 bytes)
	W1025 01:48:14.911566   10588 certs.go:384] ignoring C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\4200_empty.pem, impossibly tiny 0 bytes
	I1025 01:48:14.911566   10588 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1025 01:48:14.912568   10588 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1025 01:48:14.912568   10588 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1025 01:48:14.912568   10588 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1025 01:48:14.913569   10588 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem (1708 bytes)
	I1025 01:48:14.914572   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 01:48:14.968993   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 01:48:15.035988   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 01:48:15.085963   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-012958\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 01:48:15.140000   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 01:48:15.204346   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 01:48:15.252332   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 01:48:15.302328   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 01:48:15.351331   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 01:48:15.407330   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\4200.pem --> /usr/share/ca-certificates/4200.pem (1338 bytes)
	I1025 01:48:15.456837   10588 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem --> /usr/share/ca-certificates/42002.pem (1708 bytes)
	I1025 01:48:15.510446   10588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 01:48:15.565102   10588 ssh_runner.go:195] Run: openssl version
	I1025 01:48:15.606110   10588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42002.pem && ln -fs /usr/share/ca-certificates/42002.pem /etc/ssl/certs/42002.pem"
	I1025 01:48:15.646094   10588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42002.pem
	I1025 01:48:15.657106   10588 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 25 00:08 /usr/share/ca-certificates/42002.pem
	I1025 01:48:15.666080   10588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42002.pem
	I1025 01:48:15.694118   10588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42002.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 01:48:15.729107   10588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 01:48:15.766731   10588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 01:48:15.777749   10588 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 25 00:00 /usr/share/ca-certificates/minikubeCA.pem
	I1025 01:48:15.791762   10588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 01:48:15.815737   10588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 01:48:15.845732   10588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4200.pem && ln -fs /usr/share/ca-certificates/4200.pem /etc/ssl/certs/4200.pem"
	I1025 01:48:15.890734   10588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4200.pem
	I1025 01:48:15.902715   10588 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 25 00:08 /usr/share/ca-certificates/4200.pem
	I1025 01:48:15.912710   10588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4200.pem
	I1025 01:48:15.934710   10588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4200.pem /etc/ssl/certs/51391683.0"
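The openssl/ln sequence above builds the standard OpenSSL hashed-certificate directory: each CA in /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash plus a ".0" suffix so TLS clients can find it by hash lookup. A sketch of the same step for the minikube CA, with paths taken from the log:

	# Compute the subject hash (b5213941 for minikubeCA.pem in this run) and create the hashed symlink
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"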
	I1025 01:48:15.955720   10588 kubeadm.go:396] StartCluster: {Name:calico-012958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-012958 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 01:48:15.962760   10588 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 01:48:16.040325   10588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 01:48:16.072326   10588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 01:48:16.096335   10588 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1025 01:48:16.106323   10588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 01:48:16.138322   10588 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 01:48:16.138322   10588 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 01:48:16.233038   10588 kubeadm.go:317] W1025 01:48:16.230616    1227 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 01:48:16.319162   10588 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 01:48:14.773772    8088 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-014519
	
	I1025 01:48:14.786746    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:15.012977    8088 main.go:134] libmachine: Using SSH client type: native
	I1025 01:48:15.013973    8088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50399 <nil> <nil>}
	I1025 01:48:15.013973    8088 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-014519' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-014519/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-014519' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 01:48:15.205328    8088 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1025 01:48:15.205328    8088 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I1025 01:48:15.205328    8088 ubuntu.go:177] setting up certificates
	I1025 01:48:15.205328    8088 provision.go:83] configureAuth start
	I1025 01:48:15.213330    8088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-014519
	I1025 01:48:15.403348    8088 provision.go:138] copyHostCerts
	I1025 01:48:15.403348    8088 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I1025 01:48:15.403348    8088 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I1025 01:48:15.404349    8088 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1025 01:48:15.405346    8088 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I1025 01:48:15.405346    8088 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I1025 01:48:15.406353    8088 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1025 01:48:15.407330    8088 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I1025 01:48:15.407330    8088 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I1025 01:48:15.407330    8088 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1675 bytes)
	I1025 01:48:15.408327    8088 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-014519 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-014519]
	I1025 01:48:15.971732    8088 provision.go:172] copyRemoteCerts
	I1025 01:48:15.986386    8088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 01:48:15.994395    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:16.192330    8088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50399 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\newest-cni-014519\id_rsa Username:docker}
	I1025 01:48:16.343585    8088 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 01:48:16.392081    8088 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I1025 01:48:16.444913    8088 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 01:48:16.505400    8088 provision.go:86] duration metric: configureAuth took 1.3000628s
	I1025 01:48:16.505400    8088 ubuntu.go:193] setting minikube options for container-runtime
	I1025 01:48:16.506397    8088 config.go:180] Loaded profile config "newest-cni-014519": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:48:16.516402    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:16.748631    8088 main.go:134] libmachine: Using SSH client type: native
	I1025 01:48:16.749578    8088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50399 <nil> <nil>}
	I1025 01:48:16.749578    8088 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 01:48:16.949320    8088 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 01:48:16.949320    8088 ubuntu.go:71] root file system type: overlay
	I1025 01:48:16.949320    8088 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 01:48:16.956315    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:17.189071    8088 main.go:134] libmachine: Using SSH client type: native
	I1025 01:48:17.190079    8088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50399 <nil> <nil>}
	I1025 01:48:17.190079    8088 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 01:48:17.358102    8088 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 01:48:17.365097    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:14.519461   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:15.021973   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:15.514441   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:16.020692   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:16.520390   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:17.008579   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:17.509188   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:18.010250   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:18.508272   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:19.018579   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:16.505400   10588 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 01:48:17.565180    8088 main.go:134] libmachine: Using SSH client type: native
	I1025 01:48:17.565180    8088 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50399 <nil> <nil>}
	I1025 01:48:17.565180    8088 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 01:48:17.727630    8088 main.go:134] libmachine: SSH cmd err, output: <nil>: 
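The one-liner above only replaces and restarts docker.service when the freshly rendered unit differs from the installed one, so an unchanged configuration costs nothing. Expanded into a more readable form:

	# Replace the unit and bounce Docker only if the new file differs (or the old one is missing)
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	fi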
	I1025 01:48:17.727630    8088 machine.go:91] provisioned docker machine in 6.4748718s
	I1025 01:48:17.727630    8088 start.go:300] post-start starting for "newest-cni-014519" (driver="docker")
	I1025 01:48:17.727630    8088 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 01:48:17.736622    8088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 01:48:17.743608    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:17.982735    8088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50399 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\newest-cni-014519\id_rsa Username:docker}
	I1025 01:48:18.126110    8088 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 01:48:18.136135    8088 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 01:48:18.136135    8088 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 01:48:18.136135    8088 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 01:48:18.136135    8088 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1025 01:48:18.136135    8088 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I1025 01:48:18.136135    8088 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I1025 01:48:18.137100    8088 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem -> 42002.pem in /etc/ssl/certs
	I1025 01:48:18.148121    8088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 01:48:18.173384    8088 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem --> /etc/ssl/certs/42002.pem (1708 bytes)
	I1025 01:48:18.224383    8088 start.go:303] post-start completed in 496.749ms
	I1025 01:48:18.234387    8088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 01:48:18.241378    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:18.451630    8088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50399 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\newest-cni-014519\id_rsa Username:docker}
	I1025 01:48:18.614754    8088 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 01:48:18.638658    8088 fix.go:57] fixHost completed within 9.2174386s
	I1025 01:48:18.638822    8088 start.go:83] releasing machines lock for "newest-cni-014519", held for 9.2176024s
	I1025 01:48:18.645650    8088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-014519
	I1025 01:48:18.848570    8088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 01:48:18.856567    8088 ssh_runner.go:195] Run: systemctl --version
	I1025 01:48:18.857567    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:18.863564    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:19.069569    8088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50399 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\newest-cni-014519\id_rsa Username:docker}
	I1025 01:48:19.086571    8088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50399 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\newest-cni-014519\id_rsa Username:docker}
	I1025 01:48:19.173576    8088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 01:48:19.194582    8088 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I1025 01:48:19.258390    8088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:48:19.426584    8088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 01:48:19.637040    8088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 01:48:19.670914    8088 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1025 01:48:19.683751    8088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 01:48:19.711747    8088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 01:48:19.752749    8088 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 01:48:19.927888    8088 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 01:48:20.107881    8088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:48:20.300041    8088 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 01:48:21.224174    8088 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 01:48:21.411316    8088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:48:21.601654    8088 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1025 01:48:21.630185    8088 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 01:48:21.641505    8088 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 01:48:21.656830    8088 start.go:472] Will wait 60s for crictl version
	I1025 01:48:21.669811    8088 ssh_runner.go:195] Run: sudo crictl version
	I1025 01:48:21.740510    8088 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.18
	RuntimeApiVersion:  1.41.0
	I1025 01:48:21.750516    8088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 01:48:21.828682    8088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 01:48:21.907610    8088 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.18 ...
	I1025 01:48:21.917609    8088 cli_runner.go:164] Run: docker exec -t newest-cni-014519 dig +short host.docker.internal
	I1025 01:48:22.335111    8088 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1025 01:48:22.344979    8088 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1025 01:48:22.361761    8088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 01:48:22.399954    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:22.636548    8088 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I1025 01:48:19.523991   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:20.013453   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:20.518719   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:21.012796   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:21.506673   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:22.014081   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:22.521935   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:23.011522   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:23.514925   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:24.021054   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:24.516033   10328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:25.382417   10328 kubeadm.go:1067] duration metric: took 13.2732466s to wait for elevateKubeSystemPrivileges.
	I1025 01:48:25.382572   10328 kubeadm.go:398] StartCluster complete in 47.9783217s
	I1025 01:48:25.382709   10328 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:25.382910   10328 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 01:48:25.385694   10328 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:26.104061   10328 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "auto-012955" rescaled to 1
	I1025 01:48:26.104061   10328 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 01:48:26.109333   10328 out.go:177] * Verifying Kubernetes components...
	I1025 01:48:26.104061   10328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 01:48:26.104061   10328 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I1025 01:48:26.106019   10328 config.go:180] Loaded profile config "auto-012955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:48:26.109746   10328 addons.go:65] Setting default-storageclass=true in profile "auto-012955"
	I1025 01:48:26.113753   10328 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-012955"
	I1025 01:48:26.109746   10328 addons.go:65] Setting storage-provisioner=true in profile "auto-012955"
	I1025 01:48:26.113753   10328 addons.go:153] Setting addon storage-provisioner=true in "auto-012955"
	W1025 01:48:26.113753   10328 addons.go:162] addon storage-provisioner should already be in state true
	I1025 01:48:26.113753   10328 host.go:66] Checking if "auto-012955" exists ...
	I1025 01:48:26.129528   10328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 01:48:26.136528   10328 cli_runner.go:164] Run: docker container inspect auto-012955 --format={{.State.Status}}
	I1025 01:48:26.139526   10328 cli_runner.go:164] Run: docker container inspect auto-012955 --format={{.State.Status}}
	I1025 01:48:26.392722   10328 addons.go:153] Setting addon default-storageclass=true in "auto-012955"
	W1025 01:48:26.393740   10328 addons.go:162] addon default-storageclass should already be in state true
	I1025 01:48:26.393740   10328 host.go:66] Checking if "auto-012955" exists ...
	I1025 01:48:26.397733   10328 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 01:48:27.023621    4244 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1025 01:48:27.023621    4244 kubeadm.go:317] [preflight] Running pre-flight checks
	I1025 01:48:27.023621    4244 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 01:48:27.024622    4244 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 01:48:27.024622    4244 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 01:48:27.024622    4244 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 01:48:27.027604    4244 out.go:204]   - Generating certificates and keys ...
	I1025 01:48:27.027604    4244 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1025 01:48:27.027604    4244 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1025 01:48:27.028603    4244 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 01:48:27.028603    4244 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1025 01:48:27.028603    4244 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1025 01:48:27.028603    4244 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1025 01:48:27.028603    4244 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1025 01:48:27.028603    4244 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [cilium-012958 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 01:48:27.029608    4244 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1025 01:48:27.029608    4244 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [cilium-012958 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 01:48:27.029608    4244 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 01:48:27.029608    4244 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 01:48:27.029608    4244 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1025 01:48:27.029608    4244 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 01:48:27.029608    4244 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 01:48:27.030604    4244 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 01:48:27.030604    4244 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 01:48:27.030604    4244 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 01:48:27.030604    4244 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 01:48:27.030604    4244 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 01:48:27.030604    4244 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1025 01:48:27.030604    4244 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 01:48:27.033614    4244 out.go:204]   - Booting up control plane ...
	I1025 01:48:27.033614    4244 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 01:48:27.033614    4244 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 01:48:27.033614    4244 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 01:48:27.033614    4244 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 01:48:27.034603    4244 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 01:48:27.034603    4244 kubeadm.go:317] [apiclient] All control plane components are healthy after 18.014737 seconds
	I1025 01:48:27.034603    4244 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 01:48:27.034603    4244 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 01:48:27.034603    4244 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 01:48:27.035604    4244 kubeadm.go:317] [mark-control-plane] Marking the node cilium-012958 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 01:48:27.035604    4244 kubeadm.go:317] [bootstrap-token] Using token: 6cqtb7.jlzs1c3oqwmk1k6o
	I1025 01:48:27.038602    4244 out.go:204]   - Configuring RBAC rules ...
	I1025 01:48:27.038602    4244 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 01:48:27.038602    4244 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 01:48:27.038602    4244 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 01:48:27.039612    4244 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 01:48:27.039612    4244 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 01:48:27.039612    4244 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 01:48:27.039612    4244 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 01:48:27.039612    4244 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I1025 01:48:27.040602    4244 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I1025 01:48:27.040602    4244 kubeadm.go:317] 
	I1025 01:48:27.040602    4244 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I1025 01:48:27.040602    4244 kubeadm.go:317] 
	I1025 01:48:27.040602    4244 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I1025 01:48:27.040602    4244 kubeadm.go:317] 
	I1025 01:48:27.040602    4244 kubeadm.go:317]   mkdir -p $HOME/.kube
	I1025 01:48:27.040602    4244 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 01:48:27.040602    4244 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 01:48:27.040602    4244 kubeadm.go:317] 
	I1025 01:48:27.041606    4244 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I1025 01:48:27.041606    4244 kubeadm.go:317] 
	I1025 01:48:27.041606    4244 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 01:48:27.041606    4244 kubeadm.go:317] 
	I1025 01:48:27.041606    4244 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I1025 01:48:27.041606    4244 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 01:48:27.041606    4244 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 01:48:27.041606    4244 kubeadm.go:317] 
	I1025 01:48:27.041606    4244 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 01:48:27.041606    4244 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I1025 01:48:27.042604    4244 kubeadm.go:317] 
	I1025 01:48:27.042604    4244 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 6cqtb7.jlzs1c3oqwmk1k6o \
	I1025 01:48:27.042604    4244 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:cfe7dd7a8e61587818260abb61477c9598aed0e51cc4d8006ee76bf98159c639 \
	I1025 01:48:27.042604    4244 kubeadm.go:317] 	--control-plane 
	I1025 01:48:27.042604    4244 kubeadm.go:317] 
	I1025 01:48:27.042604    4244 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I1025 01:48:27.042604    4244 kubeadm.go:317] 
	I1025 01:48:27.042604    4244 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 6cqtb7.jlzs1c3oqwmk1k6o \
	I1025 01:48:27.043954    4244 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:cfe7dd7a8e61587818260abb61477c9598aed0e51cc4d8006ee76bf98159c639 
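The join commands printed above embed a bootstrap token and the SHA-256 hash of the cluster CA's public key. The token can be listed or recreated with kubeadm token, and the hash recomputed from the CA certificate; a sketch using the certificatesDir from the config above (the paths are this cluster's, not a general default):

	# List current bootstrap tokens, then recompute the discovery-token-ca-cert-hash
	sudo /var/lib/minikube/binaries/v1.25.3/kubeadm token list
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex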
	I1025 01:48:27.043954    4244 cni.go:95] Creating CNI manager for "cilium"
	I1025 01:48:27.044860    4244 out.go:177] * Configuring Cilium (Container Networking Interface) ...
	I1025 01:48:22.638543    8088 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 01:48:22.645243    8088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 01:48:22.713894    8088 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 01:48:22.713894    8088 docker.go:542] Images already preloaded, skipping extraction
	I1025 01:48:22.723548    8088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 01:48:22.782520    8088 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 01:48:22.782520    8088 cache_images.go:84] Images are preloaded, skipping loading
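Whether the preload supplied everything is decided by listing what the Docker daemon already holds and comparing it against the expected image set for v1.25.3; a quick manual spot-check along the same lines:

	# Confirm a required image is present in the daemon (illustrative single-image check)
	docker images --format '{{.Repository}}:{{.Tag}}' | grep 'registry.k8s.io/kube-apiserver:v1.25.3'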
	I1025 01:48:22.788558    8088 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 01:48:23.019516    8088 cni.go:95] Creating CNI manager for ""
	I1025 01:48:23.019516    8088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 01:48:23.019516    8088 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I1025 01:48:23.019516    8088 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-014519 NodeName:newest-cni-014519 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:172.17.0.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1025 01:48:23.020525    8088 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-014519"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 01:48:23.020525    8088 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-014519 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:newest-cni-014519 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 01:48:23.037525    8088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1025 01:48:23.064525    8088 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 01:48:23.076528    8088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 01:48:23.104569    8088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (514 bytes)
	I1025 01:48:23.148778    8088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 01:48:23.186748    8088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2170 bytes)
	I1025 01:48:23.237501    8088 ssh_runner.go:195] Run: grep 172.17.0.2	control-plane.minikube.internal$ /etc/hosts
	I1025 01:48:23.248489    8088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.0.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 01:48:23.271692    8088 certs.go:54] Setting up C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\newest-cni-014519 for IP: 172.17.0.2
	I1025 01:48:23.272246    8088 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key
	I1025 01:48:23.272704    8088 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key
	I1025 01:48:23.273636    8088 certs.go:298] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\newest-cni-014519\client.key
	I1025 01:48:23.274126    8088 certs.go:298] skipping minikube signed cert generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\newest-cni-014519\apiserver.key.7b749c5f
	I1025 01:48:23.274618    8088 certs.go:298] skipping aggregator signed cert generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\newest-cni-014519\proxy-client.key
	I1025 01:48:23.277446    8088 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\4200.pem (1338 bytes)
	W1025 01:48:23.277618    8088 certs.go:384] ignoring C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\4200_empty.pem, impossibly tiny 0 bytes
	I1025 01:48:23.277618    8088 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1025 01:48:23.278659    8088 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1025 01:48:23.279176    8088 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1025 01:48:23.279638    8088 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1025 01:48:23.280566    8088 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem (1708 bytes)
	I1025 01:48:23.283361    8088 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\newest-cni-014519\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 01:48:23.344470    8088 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\newest-cni-014519\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 01:48:23.414902    8088 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\newest-cni-014519\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 01:48:23.465934    8088 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\newest-cni-014519\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 01:48:23.523932    8088 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 01:48:23.585772    8088 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 01:48:23.645429    8088 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 01:48:23.711424    8088 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 01:48:23.764929    8088 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 01:48:23.824540    8088 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\4200.pem --> /usr/share/ca-certificates/4200.pem (1338 bytes)
	I1025 01:48:23.868540    8088 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem --> /usr/share/ca-certificates/42002.pem (1708 bytes)
	I1025 01:48:23.928007    8088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 01:48:23.994059    8088 ssh_runner.go:195] Run: openssl version
	I1025 01:48:24.022081    8088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4200.pem && ln -fs /usr/share/ca-certificates/4200.pem /etc/ssl/certs/4200.pem"
	I1025 01:48:24.054638    8088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4200.pem
	I1025 01:48:24.063627    8088 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 25 00:08 /usr/share/ca-certificates/4200.pem
	I1025 01:48:24.073636    8088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4200.pem
	I1025 01:48:24.110645    8088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4200.pem /etc/ssl/certs/51391683.0"
	I1025 01:48:24.147175    8088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42002.pem && ln -fs /usr/share/ca-certificates/42002.pem /etc/ssl/certs/42002.pem"
	I1025 01:48:24.188754    8088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42002.pem
	I1025 01:48:24.202755    8088 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 25 00:08 /usr/share/ca-certificates/42002.pem
	I1025 01:48:24.213732    8088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42002.pem
	I1025 01:48:24.239756    8088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42002.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 01:48:24.290760    8088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 01:48:24.336015    8088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 01:48:24.351011    8088 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 25 00:00 /usr/share/ca-certificates/minikubeCA.pem
	I1025 01:48:24.362021    8088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 01:48:24.400018    8088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 01:48:24.442009    8088 kubeadm.go:396] StartCluster: {Name:newest-cni-014519 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-014519 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNode
Requested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 01:48:24.454020    8088 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 01:48:24.536046    8088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 01:48:24.562054    8088 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1025 01:48:24.562054    8088 kubeadm.go:627] restartCluster start
	I1025 01:48:24.572028    8088 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 01:48:24.601044    8088 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:24.615030    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:24.830347    8088 kubeconfig.go:135] verify returned: extract IP: "newest-cni-014519" does not appear in C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 01:48:24.830347    8088 kubeconfig.go:146] "newest-cni-014519" context is missing from C:\Users\jenkins.minikube8\minikube-integration\kubeconfig - will repair!
	I1025 01:48:24.830347    8088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:24.862312    8088 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 01:48:24.888682    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:24.905691    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:24.932014    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:25.139894    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:25.152818    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:25.201396    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:25.342700    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:25.353863    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:25.397341    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:25.543794    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:25.554935    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:25.589331    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:25.746538    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:25.756986    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:25.785931    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:25.936519    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:25.947355    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:25.981286    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:26.138521    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:26.150516    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:26.179509    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:26.347763    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:26.362714    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:26.399718    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:26.538192    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:26.553983    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:26.580617    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:26.742323    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:26.752358    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:26.812293    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:26.945722    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:26.957610    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:26.983644    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:27.136251    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:27.153483    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:27.198281    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:27.339699    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:27.348745    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:27.383917    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
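	The repeated "Checking apiserver status" entries above are minikube polling for a kube-apiserver process with pgrep before deciding the cluster needs reconfiguration. A minimal sketch of that poll-until-deadline pattern, written as a hypothetical standalone helper rather than minikube's actual implementation:

	// Hypothetical sketch: poll for a kube-apiserver process with pgrep until a deadline.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits non-zero when no process matches the pattern.
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(200 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(5 * time.Second); err != nil {
			fmt.Println(err)
		}
	}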
	I1025 01:48:26.400708   10328 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 01:48:26.400708   10328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 01:48:26.409711   10328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-012955
	I1025 01:48:26.412723   10328 cli_runner.go:164] Run: docker container inspect auto-012955 --format={{.State.Status}}
	I1025 01:48:26.616624   10328 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 01:48:26.616624   10328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 01:48:26.623621   10328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-012955
	I1025 01:48:26.632614   10328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50276 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\auto-012955\id_rsa Username:docker}
	I1025 01:48:26.786940   10328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 01:48:26.805228   10328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" auto-012955
	I1025 01:48:26.867549   10328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50276 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\auto-012955\id_rsa Username:docker}
	I1025 01:48:27.060210   10328 node_ready.go:35] waiting up to 5m0s for node "auto-012955" to be "Ready" ...
	I1025 01:48:27.089107   10328 node_ready.go:49] node "auto-012955" has status "Ready":"True"
	I1025 01:48:27.089107   10328 node_ready.go:38] duration metric: took 28.897ms waiting for node "auto-012955" to be "Ready" ...
	I1025 01:48:27.089329   10328 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 01:48:27.194274   10328 pod_ready.go:78] waiting up to 5m0s for pod "coredns-565d847f94-57hhw" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:27.414487   10328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 01:48:27.606402   10328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 01:48:27.060089    4244 ssh_runner.go:195] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
	I1025 01:48:27.201794    4244 cilium.go:816] Using pod CIDR: 10.244.0.0/16
	I1025 01:48:27.201853    4244 cilium.go:827] cilium options: {PodSubnet:10.244.0.0/16}
	I1025 01:48:27.201940    4244 cilium.go:831] cilium config:
	---
	# Source: cilium/templates/cilium-agent-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-configmap.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: cilium-config
	  namespace: kube-system
	data:
	
	  # Identity allocation mode selects how identities are shared between cilium
	  # nodes by setting how they are stored. The options are "crd" or "kvstore".
	  # - "crd" stores identities in kubernetes as CRDs (custom resource definition).
	  #   These can be queried with:
	  #     kubectl get ciliumid
	  # - "kvstore" stores identities in a kvstore, etcd or consul, that is
	  #   configured below. Cilium versions before 1.6 supported only the kvstore
	  #   backend. Upgrades from these older cilium versions should continue using
	  #   the kvstore by commenting out the identity-allocation-mode below, or
	  #   setting it to "kvstore".
	  identity-allocation-mode: crd
	  cilium-endpoint-gc-interval: "5m0s"
	
	  # If you want to run cilium in debug mode change this value to true
	  debug: "false"
	  # The agent can be put into the following three policy enforcement modes
	  # default, always and never.
	  # https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
	  enable-policy: "default"
	
	  # Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
	  # address.
	  enable-ipv4: "true"
	
	  # Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
	  # address.
	  enable-ipv6: "false"
	  # Users who wish to specify their own custom CNI configuration file must set
	  # custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
	  custom-cni-conf: "false"
	  enable-bpf-clock-probe: "true"
	  # If you want cilium monitor to aggregate tracing for packets, set this level
	  # to "low", "medium", or "maximum". The higher the level, the fewer packets
	  # will be seen in monitor output.
	  monitor-aggregation: medium
	
	  # The monitor aggregation interval governs the typical time between monitor
	  # notification events for each allowed connection.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-interval: 5s
	
	  # The monitor aggregation flags determine which TCP flags, upon their
	  # first observation, cause monitor notifications to be generated.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-flags: all
	  # Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
	  # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
	  bpf-map-dynamic-size-ratio: "0.0025"
	  # bpf-policy-map-max specifies the maximum number of entries in endpoint
	  # policy map (per endpoint)
	  bpf-policy-map-max: "16384"
	  # bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
	  # backend and affinity maps.
	  bpf-lb-map-max: "65536"
	  # Pre-allocation of map entries allows per-packet latency to be reduced, at
	  # the expense of up-front memory allocation for the entries in the maps. The
	  # default value below will minimize memory usage in the default installation;
	  # users who are sensitive to latency may consider setting this to "true".
	  #
	  # This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
	  # this option and behave as though it is set to "true".
	  #
	  # If this value is modified, then during the next Cilium startup the restore
	  # of existing endpoints and tracking of ongoing connections may be disrupted.
	  # As a result, reply packets may be dropped and the load-balancing decisions
	  # for established connections may change.
	  #
	  # If this option is set to "false" during an upgrade from 1.3 or earlier to
	  # 1.4 or later, then it may cause one-time disruptions during the upgrade.
	  preallocate-bpf-maps: "false"
	
	  # Regular expression matching compatible Istio sidecar istio-proxy
	  # container image names
	  sidecar-istio-proxy-image: "cilium/istio_proxy"
	
	  # Name of the cluster. Only relevant when building a mesh of clusters.
	  cluster-name: default
	  # Unique ID of the cluster. Must be unique across all connected clusters and
	  # in the range of 1 to 255. Only relevant when building a mesh of clusters.
	  cluster-id: ""
	
	  # Encapsulation mode for communication between nodes
	  # Possible values:
	  #   - disabled
	  #   - vxlan (default)
	  #   - geneve
	  tunnel: vxlan
	  # Enables L7 proxy for L7 policy enforcement and visibility
	  enable-l7-proxy: "true"
	
	  # wait-bpf-mount makes init container wait until bpf filesystem is mounted
	  wait-bpf-mount: "false"
	
	  masquerade: "true"
	  enable-bpf-masquerade: "true"
	
	  enable-xt-socket-fallback: "true"
	  install-iptables-rules: "true"
	
	  auto-direct-node-routes: "false"
	  enable-bandwidth-manager: "false"
	  enable-local-redirect-policy: "false"
	  kube-proxy-replacement:  "probe"
	  kube-proxy-replacement-healthz-bind-address: ""
	  enable-health-check-nodeport: "true"
	  node-port-bind-protection: "true"
	  enable-auto-protect-node-port-range: "true"
	  enable-session-affinity: "true"
	  k8s-require-ipv4-pod-cidr: "true"
	  k8s-require-ipv6-pod-cidr: "false"
	  enable-endpoint-health-checking: "true"
	  enable-health-checking: "true"
	  enable-well-known-identities: "false"
	  enable-remote-node-identity: "true"
	  operator-api-serve-addr: "127.0.0.1:9234"
	  # Enable Hubble gRPC service.
	  enable-hubble: "true"
	  # UNIX domain socket for Hubble server to listen to.
	  hubble-socket-path:  "/var/run/cilium/hubble.sock"
	  # An additional address for Hubble server to listen to (e.g. ":4244").
	  hubble-listen-address: ":4244"
	  hubble-disable-tls: "false"
	  hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
	  hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
	  hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
	  ipam: "cluster-pool"
	  cluster-pool-ipv4-cidr: "10.244.0.0/16"
	  cluster-pool-ipv4-mask-size: "24"
	  disable-cnp-status-updates: "true"
	  cgroup-root: "/run/cilium/cgroupv2"
	---
	# Source: cilium/templates/cilium-agent-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium
	rules:
	- apiGroups:
	  - networking.k8s.io
	  resources:
	  - networkpolicies
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - namespaces
	  - services
	  - nodes
	  - endpoints
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - pods
	  - pods/finalizers
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	  - delete
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  - nodes/status
	  verbs:
	  - patch
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  # Deprecated for removal in v1.10
	  - create
	  - list
	  - watch
	  - update
	
	  # This is used when validating policies in preflight. This will need to stay
	  # until we figure out how to avoid "get" inside the preflight, and then
	  # should be removed ideally.
	  - get
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	---
	# Source: cilium/templates/cilium-operator-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium-operator
	rules:
	- apiGroups:
	  - ""
	  resources:
	  # to automatically delete [core|kube]dns pods so that they start being
	  # managed by Cilium
	  - pods
	  verbs:
	  - get
	  - list
	  - watch
	  - delete
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  # to perform the translation of a CNP that contains 'ToGroup' to its endpoints
	  - services
	  - endpoints
	  # to check apiserver connectivity
	  - namespaces
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/status
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  - create
	  - get
	  - list
	  - update
	  - watch
	# For cilium-operator running in HA mode.
	#
	# Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
	# between multiple running instances.
	# The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
	# common and fewer objects in the cluster watch "all Leases".
	# The support for leases was introduced in coordination.k8s.io/v1 during Kubernetes 1.14 release.
	# In Cilium we currently don't support HA mode for K8s version < 1.14. This condition makes sure
	# that we only authorize access to leases resources in supported K8s versions.
	- apiGroups:
	  - coordination.k8s.io
	  resources:
	  - leases
	  verbs:
	  - create
	  - get
	  - update
	---
	# Source: cilium/templates/cilium-agent-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium
	subjects:
	- kind: ServiceAccount
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium-operator
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium-operator
	subjects:
	- kind: ServiceAccount
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-agent-daemonset.yaml
	apiVersion: apps/v1
	kind: DaemonSet
	metadata:
	  labels:
	    k8s-app: cilium
	  name: cilium
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: cilium
	  updateStrategy:
	    rollingUpdate:
	      maxUnavailable: 2
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	        # This annotation plus the CriticalAddonsOnly toleration makes
	        # cilium a critical pod in the cluster, which ensures cilium
	        # gets priority scheduling.
	        # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
	        scheduler.alpha.kubernetes.io/critical-pod: ""
	      labels:
	        k8s-app: cilium
	    spec:
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: k8s-app
	                operator: In
	                values:
	                - cilium
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        command:
	        - cilium-agent
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 10
	          # The initial delay for the liveness probe is intentionally large to
	          # avoid an endless kill & restart cycle in the event that the initial
	          # bootstrapping takes longer than expected.
	          initialDelaySeconds: 120
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        readinessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 3
	          initialDelaySeconds: 5
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_FLANNEL_MASTER_DEVICE
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-master-device
	              name: cilium-config
	              optional: true
	        - name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-uninstall-on-exit
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CLUSTERMESH_CONFIG
	          value: /var/lib/cilium/clustermesh/
	        - name: CILIUM_CNI_CHAINING_MODE
	          valueFrom:
	            configMapKeyRef:
	              key: cni-chaining-mode
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CUSTOM_CNI_CONF
	          valueFrom:
	            configMapKeyRef:
	              key: custom-cni-conf
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        lifecycle:
	          postStart:
	            exec:
	              command:
	              - "/cni-install.sh"
	              - "--enable-debug=false"
	          preStop:
	            exec:
	              command:
	              - /cni-uninstall.sh
	        name: cilium-agent
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	            - SYS_MODULE
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        - mountPath: /host/opt/cni/bin
	          name: cni-path
	        - mountPath: /host/etc/cni/net.d
	          name: etc-cni-netd
	        - mountPath: /var/lib/cilium/clustermesh
	          name: clustermesh-secrets
	          readOnly: true
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	          # Needed to be able to load kernel modules
	        - mountPath: /lib/modules
	          name: lib-modules
	          readOnly: true
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	        - mountPath: /var/lib/cilium/tls/hubble
	          name: hubble-tls
	          readOnly: true
	      hostNetwork: true
	      initContainers:
	      # Required to mount cgroup2 filesystem on the underlying Kubernetes node.
	      # We use nsenter command with host's cgroup and mount namespaces enabled.
	      - name: mount-cgroup
	        env:
	          - name: CGROUP_ROOT
	            value: /run/cilium/cgroupv2
	          - name: BIN_PATH
	            value: /opt/cni/bin
	        command:
	          - sh
	          - -c
	          # The statically linked Go program binary is invoked to avoid any
	          # dependency on utilities like sh and mount that can be missing on certain
	          # distros installed on the underlying host. Copy the binary to the
	          # same directory where we install cilium cni plugin so that exec permissions
	          # are available.
	          - 'cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; rm /hostbin/cilium-mount'
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        volumeMounts:
	          - mountPath: /hostproc
	            name: hostproc
	          - mountPath: /hostbin
	            name: cni-path
	        securityContext:
	          privileged: true
	      - command:
	        - /init-container.sh
	        env:
	        - name: CILIUM_ALL_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_BPF_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-bpf-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_WAIT_BPF_MOUNT
	          valueFrom:
	            configMapKeyRef:
	              key: wait-bpf-mount
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        name: clean-cilium-state
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	          mountPropagation: HostToContainer
	          # Required to mount cgroup filesystem from the host to cilium agent pod
	        - mountPath: /run/cilium/cgroupv2
	          name: cilium-cgroup
	          mountPropagation: HostToContainer
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        resources:
	          requests:
	            cpu: 100m
	            memory: 100Mi
	      restartPolicy: Always
	      priorityClassName: system-node-critical
	      serviceAccount: cilium
	      serviceAccountName: cilium
	      terminationGracePeriodSeconds: 1
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To keep state between restarts / upgrades
	      - hostPath:
	          path: /var/run/cilium
	          type: DirectoryOrCreate
	        name: cilium-run
	        # To keep state between restarts / upgrades for bpf maps
	      - hostPath:
	          path: /sys/fs/bpf
	          type: DirectoryOrCreate
	        name: bpf-maps
	      # To mount cgroup2 filesystem on the host
	      - hostPath:
	          path: /proc
	          type: Directory
	        name: hostproc
	      # To keep state between restarts / upgrades for cgroup2 filesystem
	      - hostPath:
	          path: /run/cilium/cgroupv2
	          type: DirectoryOrCreate
	        name: cilium-cgroup
	      # To install cilium cni plugin in the host
	      - hostPath:
	          path:  /opt/cni/bin
	          type: DirectoryOrCreate
	        name: cni-path
	        # To install cilium cni configuration in the host
	      - hostPath:
	          path: /etc/cni/net.d
	          type: DirectoryOrCreate
	        name: etc-cni-netd
	        # To be able to load kernel modules
	      - hostPath:
	          path: /lib/modules
	        name: lib-modules
	        # To access iptables concurrently with other processes (e.g. kube-proxy)
	      - hostPath:
	          path: /run/xtables.lock
	          type: FileOrCreate
	        name: xtables-lock
	        # To read the clustermesh configuration
	      - name: clustermesh-secrets
	        secret:
	          defaultMode: 420
	          optional: true
	          secretName: cilium-clustermesh
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	      - name: hubble-tls
	        projected:
	          sources:
	          - secret:
	              name: hubble-server-certs
	              items:
	                - key: tls.crt
	                  path: server.crt
	                - key: tls.key
	                  path: server.key
	              optional: true
	          - configMap:
	              name: hubble-ca-cert
	              items:
	                - key: ca.crt
	                  path: client-ca.crt
	              optional: true
	---
	# Source: cilium/templates/cilium-operator-deployment.yaml
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  labels:
	    io.cilium/app: operator
	    name: cilium-operator
	  name: cilium-operator
	  namespace: kube-system
	spec:
	  # We support HA mode only for Kubernetes version > 1.14
	  # See docs on ServerCapabilities.LeasesResourceLock in file pkg/k8s/version/version.go
	  # for more details.
	  replicas: 1
	  selector:
	    matchLabels:
	      io.cilium/app: operator
	      name: cilium-operator
	  strategy:
	    rollingUpdate:
	      maxSurge: 1
	      maxUnavailable: 1
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	      labels:
	        io.cilium/app: operator
	        name: cilium-operator
	    spec:
	      # In HA mode, cilium-operator pods must not be scheduled on the same
	      # node as they will clash with each other.
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: io.cilium/app
	                operator: In
	                values:
	                - operator
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        - --debug=$(CILIUM_DEBUG)
	        command:
	        - cilium-operator-generic
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_DEBUG
	          valueFrom:
	            configMapKeyRef:
	              key: debug
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/operator-generic:v1.9.9@sha256:3726a965cd960295ca3c5e7f2b543c02096c0912c6652eb8bbb9ce54bcaa99d8"
	        imagePullPolicy: IfNotPresent
	        name: cilium-operator
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9234
	            scheme: HTTP
	          initialDelaySeconds: 60
	          periodSeconds: 10
	          timeoutSeconds: 3
	        volumeMounts:
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	      hostNetwork: true
	      restartPolicy: Always
	      priorityClassName: system-cluster-critical
	      serviceAccount: cilium-operator
	      serviceAccountName: cilium-operator
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	
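	The generated Cilium manifest above is written to /var/tmp/minikube/cni.yaml and applied with kubectl, as the following log lines show. A hypothetical standalone sketch of that apply step in Go, shelling out to kubectl with the in-VM kubeconfig path seen in the log (not minikube's own code):

	// Hypothetical sketch: apply a rendered CNI manifest with kubectl.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl",
			"--kubeconfig", "/var/lib/minikube/kubeconfig",
			"apply", "-f", "/var/tmp/minikube/cni.yaml")
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl apply failed: %v\n%s", err, out)
		}
		log.Printf("applied CNI manifest:\n%s", out)
	}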
	I1025 01:48:27.202088    4244 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1025 01:48:27.202088    4244 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (23204 bytes)
	I1025 01:48:27.324697    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 01:48:29.836466    4244 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.5117535s)
	I1025 01:48:29.836466    4244 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 01:48:29.847475    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.27.1 minikube.k8s.io/commit=e51468b57074bb26eb09785222979dd1e5fe9cd4 minikube.k8s.io/name=cilium-012958 minikube.k8s.io/updated_at=2022_10_25T01_48_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:29.847475    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:29.853478    4244 ops.go:34] apiserver oom_adj: -16
	I1025 01:48:30.142793    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:29.383241   10328 pod_ready.go:102] pod "coredns-565d847f94-57hhw" in "kube-system" namespace has status "Ready":"False"
	I1025 01:48:31.488915   10328 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.7018587s)
	I1025 01:48:31.488966   10328 start.go:826] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I1025 01:48:31.587898   10328 pod_ready.go:102] pod "coredns-565d847f94-57hhw" in "kube-system" namespace has status "Ready":"False"
	I1025 01:48:31.911525   10328 pod_ready.go:92] pod "coredns-565d847f94-57hhw" in "kube-system" namespace has status "Ready":"True"
	I1025 01:48:31.911525   10328 pod_ready.go:81] duration metric: took 4.7172227s waiting for pod "coredns-565d847f94-57hhw" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:31.911525   10328 pod_ready.go:78] waiting up to 5m0s for pod "coredns-565d847f94-v4zjt" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:31.981342   10328 pod_ready.go:92] pod "coredns-565d847f94-v4zjt" in "kube-system" namespace has status "Ready":"True"
	I1025 01:48:31.981470   10328 pod_ready.go:81] duration metric: took 69.944ms waiting for pod "coredns-565d847f94-v4zjt" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:31.981470   10328 pod_ready.go:78] waiting up to 5m0s for pod "etcd-auto-012955" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:32.003835   10328 pod_ready.go:92] pod "etcd-auto-012955" in "kube-system" namespace has status "Ready":"True"
	I1025 01:48:32.003835   10328 pod_ready.go:81] duration metric: took 22.3651ms waiting for pod "etcd-auto-012955" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:32.003835   10328 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-auto-012955" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:32.055962   10328 pod_ready.go:92] pod "kube-apiserver-auto-012955" in "kube-system" namespace has status "Ready":"True"
	I1025 01:48:32.056957   10328 pod_ready.go:81] duration metric: took 53.1216ms waiting for pod "kube-apiserver-auto-012955" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:32.056957   10328 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-auto-012955" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:32.095090   10328 pod_ready.go:92] pod "kube-controller-manager-auto-012955" in "kube-system" namespace has status "Ready":"True"
	I1025 01:48:32.095090   10328 pod_ready.go:81] duration metric: took 38.1327ms waiting for pod "kube-controller-manager-auto-012955" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:32.095090   10328 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-vdjf8" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:32.105085   10328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.4986563s)
	I1025 01:48:32.105085   10328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.69057s)
	I1025 01:48:32.111086   10328 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1025 01:48:27.542388    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:27.552396    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:27.580951    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:27.733308    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:27.743869    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:27.774428    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:27.936541    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:27.946535    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:27.975222    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:27.975222    8088 api_server.go:165] Checking apiserver status ...
	I1025 01:48:27.995214    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 01:48:28.022202    8088 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:28.022202    8088 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I1025 01:48:28.022202    8088 kubeadm.go:1114] stopping kube-system containers ...
	I1025 01:48:28.029193    8088 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 01:48:28.105722    8088 docker.go:443] Stopping containers: [3623a09cfb96 04c12b3febe9 8c32d977d6a4 e4edba2e7556 be757359bd1e 269e78942afe 76af3fc0ba53 97497f2af52a 7ff17fe11390 669ed1999ddd d543071ca09f 028ff076b356 08c2e2824d68 6199efa3639a 4174f19e3463 dc246491ed5a]
	I1025 01:48:28.120557    8088 ssh_runner.go:195] Run: docker stop 3623a09cfb96 04c12b3febe9 8c32d977d6a4 e4edba2e7556 be757359bd1e 269e78942afe 76af3fc0ba53 97497f2af52a 7ff17fe11390 669ed1999ddd d543071ca09f 028ff076b356 08c2e2824d68 6199efa3639a 4174f19e3463 dc246491ed5a
	I1025 01:48:28.213818    8088 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 01:48:28.320628    8088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 01:48:28.341842    8088 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Oct 25 01:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Oct 25 01:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct 25 01:47 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5598 Oct 25 01:46 /etc/kubernetes/scheduler.conf
	
	I1025 01:48:28.358334    8088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 01:48:28.416239    8088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 01:48:28.446213    8088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 01:48:28.465225    8088 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:28.474226    8088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 01:48:28.517036    8088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 01:48:28.543865    8088 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 01:48:28.559732    8088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 01:48:28.620016    8088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 01:48:28.697892    8088 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1025 01:48:28.697892    8088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 01:48:28.925780    8088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 01:48:30.359078    8088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.4332895s)
	I1025 01:48:30.359078    8088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 01:48:30.703545    8088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 01:48:30.918872    8088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 01:48:31.212545    8088 api_server.go:51] waiting for apiserver process to appear ...
	I1025 01:48:31.224517    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:48:31.841788    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:48:32.342847    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:48:32.113092   10328 addons.go:414] enableAddons completed in 6.0089941s
	I1025 01:48:32.226875   10328 pod_ready.go:92] pod "kube-proxy-vdjf8" in "kube-system" namespace has status "Ready":"True"
	I1025 01:48:32.226875   10328 pod_ready.go:81] duration metric: took 131.7845ms waiting for pod "kube-proxy-vdjf8" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:32.226875   10328 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-auto-012955" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:32.621712   10328 pod_ready.go:92] pod "kube-scheduler-auto-012955" in "kube-system" namespace has status "Ready":"True"
	I1025 01:48:32.621712   10328 pod_ready.go:81] duration metric: took 394.8344ms waiting for pod "kube-scheduler-auto-012955" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:32.622268   10328 pod_ready.go:38] duration metric: took 5.5323495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 01:48:32.622268   10328 api_server.go:51] waiting for apiserver process to appear ...
	I1025 01:48:32.636698   10328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:48:32.681712   10328 api_server.go:71] duration metric: took 6.5776107s to wait for apiserver process to appear ...
	I1025 01:48:32.681712   10328 api_server.go:87] waiting for apiserver healthz status ...
	I1025 01:48:32.681712   10328 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:50280/healthz ...
	I1025 01:48:32.706548   10328 api_server.go:278] https://127.0.0.1:50280/healthz returned 200:
	ok
	I1025 01:48:32.711427   10328 api_server.go:140] control plane version: v1.25.3
	I1025 01:48:32.711427   10328 api_server.go:130] duration metric: took 29.7156ms to wait for apiserver health ...
	I1025 01:48:32.711427   10328 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 01:48:32.839302   10328 system_pods.go:59] 8 kube-system pods found
	I1025 01:48:32.839383   10328 system_pods.go:61] "coredns-565d847f94-57hhw" [ca9e9930-c89a-4c65-9c1e-015354d05b56] Running
	I1025 01:48:32.839383   10328 system_pods.go:61] "coredns-565d847f94-v4zjt" [83af4bb7-6e2c-4a5f-b5bf-e697f36b0f1a] Running
	I1025 01:48:32.839383   10328 system_pods.go:61] "etcd-auto-012955" [d1075f61-678f-4033-8e78-6bdf69f7f320] Running
	I1025 01:48:32.839383   10328 system_pods.go:61] "kube-apiserver-auto-012955" [8d342b33-54fb-4c1a-b493-f4eca1b47dac] Running
	I1025 01:48:32.839383   10328 system_pods.go:61] "kube-controller-manager-auto-012955" [6e082a81-0a03-4ffb-9bf6-87286d205c02] Running
	I1025 01:48:32.839383   10328 system_pods.go:61] "kube-proxy-vdjf8" [147fb870-a480-4331-b13a-61cfd8723090] Running
	I1025 01:48:32.839383   10328 system_pods.go:61] "kube-scheduler-auto-012955" [878e0074-a51d-4c10-aeef-b72407c25fc7] Running
	I1025 01:48:32.839383   10328 system_pods.go:61] "storage-provisioner" [d207ceca-663d-4767-8099-ae4c66eb9471] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 01:48:32.839383   10328 system_pods.go:74] duration metric: took 127.9547ms to wait for pod list to return data ...
	I1025 01:48:32.839383   10328 default_sa.go:34] waiting for default service account to be created ...
	I1025 01:48:33.023854   10328 default_sa.go:45] found service account: "default"
	I1025 01:48:33.023945   10328 default_sa.go:55] duration metric: took 184.5615ms for default service account to be created ...
	I1025 01:48:33.023945   10328 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 01:48:33.231539   10328 system_pods.go:86] 8 kube-system pods found
	I1025 01:48:33.231539   10328 system_pods.go:89] "coredns-565d847f94-57hhw" [ca9e9930-c89a-4c65-9c1e-015354d05b56] Running
	I1025 01:48:33.231539   10328 system_pods.go:89] "coredns-565d847f94-v4zjt" [83af4bb7-6e2c-4a5f-b5bf-e697f36b0f1a] Running
	I1025 01:48:33.231539   10328 system_pods.go:89] "etcd-auto-012955" [d1075f61-678f-4033-8e78-6bdf69f7f320] Running
	I1025 01:48:33.231539   10328 system_pods.go:89] "kube-apiserver-auto-012955" [8d342b33-54fb-4c1a-b493-f4eca1b47dac] Running
	I1025 01:48:33.231539   10328 system_pods.go:89] "kube-controller-manager-auto-012955" [6e082a81-0a03-4ffb-9bf6-87286d205c02] Running
	I1025 01:48:33.231539   10328 system_pods.go:89] "kube-proxy-vdjf8" [147fb870-a480-4331-b13a-61cfd8723090] Running
	I1025 01:48:33.231539   10328 system_pods.go:89] "kube-scheduler-auto-012955" [878e0074-a51d-4c10-aeef-b72407c25fc7] Running
	I1025 01:48:33.231539   10328 system_pods.go:89] "storage-provisioner" [d207ceca-663d-4767-8099-ae4c66eb9471] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 01:48:33.231539   10328 system_pods.go:126] duration metric: took 207.5928ms to wait for k8s-apps to be running ...
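The system_pods.go block above lists the kube-system pods and reports which are Running versus Pending before declaring k8s-apps healthy. A rough client-go equivalent, assuming a kubeconfig path is passed as the first argument (a sketch, not minikube's own checker):

	package main

	import (
		"context"
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Usage: go run main.go /path/to/kubeconfig
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Args[1])
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%q phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
		}
	}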
	I1025 01:48:33.231539   10328 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 01:48:33.247046   10328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 01:48:33.295796   10328 system_svc.go:56] duration metric: took 64.2557ms WaitForService to wait for kubelet.
	I1025 01:48:33.295796   10328 kubeadm.go:573] duration metric: took 7.191691s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 01:48:33.295796   10328 node_conditions.go:102] verifying NodePressure condition ...
	I1025 01:48:33.427655   10328 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I1025 01:48:33.427655   10328 node_conditions.go:123] node cpu capacity is 16
	I1025 01:48:33.427655   10328 node_conditions.go:105] duration metric: took 131.8586ms to run NodePressure ...
	I1025 01:48:33.427655   10328 start.go:217] waiting for startup goroutines ...
	I1025 01:48:33.440645   10328 ssh_runner.go:195] Run: rm -f paused
	I1025 01:48:33.700176   10328 start.go:506] kubectl: 1.18.2, cluster: 1.25.3 (minor skew: 7)
	I1025 01:48:33.703249   10328 out.go:177] 
	W1025 01:48:33.709259   10328 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.25.3.
	I1025 01:48:33.717247   10328 out.go:177]   - Want kubectl v1.25.3? Try 'minikube kubectl -- get pods -A'
	I1025 01:48:33.726729   10328 out.go:177] * Done! kubectl is now configured to use "auto-012955" cluster and "default" namespace by default
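The warning above is driven by the minor-version gap between the local kubectl (1.18.2) and the cluster (1.25.3), reported as "minor skew: 7"; kubectl is only supported within one minor version of the apiserver. A trivial sketch of how that number is derived (the function names here are mine, not minikube's):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor versions of two
	// "major.minor.patch" strings, e.g. ("1.18.2", "1.25.3") -> 7.
	func minorSkew(a, b string) (int, error) {
		ma, err := minor(a)
		if err != nil {
			return 0, err
		}
		mb, err := minor(b)
		if err != nil {
			return 0, err
		}
		if ma > mb {
			return ma - mb, nil
		}
		return mb - ma, nil
	}

	func minor(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}

	func main() {
		skew, _ := minorSkew("1.18.2", "1.25.3")
		fmt.Printf("minor skew: %d\n", skew) // anything beyond 1 triggers the compatibility warning
	}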
	I1025 01:48:30.812289    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:31.317546    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:31.812819    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:32.316838    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:32.815586    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:33.306794    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:33.817377    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:34.308283    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:34.808780    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:35.303490    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:32.842875    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:48:33.331782    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:48:33.837935    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:48:34.338135    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:48:34.480401    8088 api_server.go:71] duration metric: took 3.2678358s to wait for apiserver process to appear ...
	I1025 01:48:34.480401    8088 api_server.go:87] waiting for apiserver healthz status ...
	I1025 01:48:34.480401    8088 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:50398/healthz ...
	I1025 01:48:34.498646    8088 api_server.go:268] stopped: https://127.0.0.1:50398/healthz: Get "https://127.0.0.1:50398/healthz": EOF
	I1025 01:48:35.003429    8088 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:50398/healthz ...
	I1025 01:48:35.809521    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:36.308799    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:36.810943    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:37.302851    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:37.805286    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:38.313891    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:38.815392    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:39.312494    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:39.811389    4244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:40.215441    4244 kubeadm.go:1067] duration metric: took 10.3789117s to wait for elevateKubeSystemPrivileges.
	I1025 01:48:40.215441    4244 kubeadm.go:398] StartCluster complete in 39.6441597s
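The repeated "kubectl get sa default" runs above are a wait loop: the post-install step (elevateKubeSystemPrivileges) can only proceed once the default service account exists in the freshly started cluster. A small sketch of that retry pattern, with the kubectl and kubeconfig paths taken from the log but otherwise illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA retries `kubectl get sa default` until the service account
	// exists, mirroring the ~500ms polling visible in the log above.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if out, err := cmd.CombinedOutput(); err == nil {
				fmt.Printf("default service account found:\n%s", out)
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		err := waitForDefaultSA("kubectl", "/var/lib/minikube/kubeconfig", 2*time.Minute)
		if err != nil {
			fmt.Println(err)
		}
	}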
	I1025 01:48:40.215441    4244 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:40.215441    4244 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 01:48:40.220468    4244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:40.820498    4244 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-012958" rescaled to 1
	I1025 01:48:40.820498    4244 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 01:48:40.820498    4244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 01:48:40.823494    4244 out.go:177] * Verifying Kubernetes components...
	I1025 01:48:40.820498    4244 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I1025 01:48:40.821469    4244 config.go:180] Loaded profile config "cilium-012958": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:48:40.827539    4244 addons.go:65] Setting storage-provisioner=true in profile "cilium-012958"
	I1025 01:48:40.827539    4244 addons.go:153] Setting addon storage-provisioner=true in "cilium-012958"
	W1025 01:48:40.827539    4244 addons.go:162] addon storage-provisioner should already be in state true
	I1025 01:48:40.827539    4244 addons.go:65] Setting default-storageclass=true in profile "cilium-012958"
	I1025 01:48:40.827539    4244 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-012958"
	I1025 01:48:40.827539    4244 host.go:66] Checking if "cilium-012958" exists ...
	I1025 01:48:40.849472    4244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 01:48:40.857483    4244 cli_runner.go:164] Run: docker container inspect cilium-012958 --format={{.State.Status}}
	I1025 01:48:40.859534    4244 cli_runner.go:164] Run: docker container inspect cilium-012958 --format={{.State.Status}}
	I1025 01:48:41.157487    4244 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 01:48:40.014332    8088 api_server.go:268] stopped: https://127.0.0.1:50398/healthz: Get "https://127.0.0.1:50398/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 01:48:40.508563    8088 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:50398/healthz ...
	I1025 01:48:43.379547   10588 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1025 01:48:43.379547   10588 kubeadm.go:317] [preflight] Running pre-flight checks
	I1025 01:48:43.380104   10588 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 01:48:43.380344   10588 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 01:48:43.380344   10588 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 01:48:43.380691   10588 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 01:48:43.386367   10588 out.go:204]   - Generating certificates and keys ...
	I1025 01:48:43.386778   10588 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1025 01:48:43.386861   10588 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1025 01:48:43.387019   10588 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 01:48:43.387272   10588 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1025 01:48:43.387272   10588 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1025 01:48:43.387571   10588 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1025 01:48:43.387571   10588 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1025 01:48:43.388188   10588 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [calico-012958 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1025 01:48:43.388188   10588 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1025 01:48:43.388188   10588 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [calico-012958 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1025 01:48:43.388730   10588 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 01:48:43.388906   10588 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 01:48:43.388906   10588 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1025 01:48:43.388906   10588 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 01:48:43.388906   10588 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 01:48:43.389881   10588 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 01:48:43.389881   10588 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 01:48:43.389881   10588 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 01:48:43.390625   10588 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 01:48:43.390625   10588 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 01:48:43.390625   10588 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1025 01:48:43.390625   10588 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 01:48:43.393631   10588 out.go:204]   - Booting up control plane ...
	I1025 01:48:43.393631   10588 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 01:48:43.393631   10588 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 01:48:43.393631   10588 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 01:48:43.394631   10588 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 01:48:43.394631   10588 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 01:48:43.394631   10588 kubeadm.go:317] [apiclient] All control plane components are healthy after 20.006498 seconds
	I1025 01:48:43.395636   10588 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 01:48:43.395636   10588 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 01:48:43.395636   10588 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 01:48:43.396625   10588 kubeadm.go:317] [mark-control-plane] Marking the node calico-012958 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 01:48:43.396625   10588 kubeadm.go:317] [bootstrap-token] Using token: kcm1je.niaeugxnay31jj1b
	I1025 01:48:43.399624   10588 out.go:204]   - Configuring RBAC rules ...
	I1025 01:48:43.399624   10588 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 01:48:43.399624   10588 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 01:48:43.399624   10588 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 01:48:43.400602   10588 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 01:48:43.401440   10588 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 01:48:43.401729   10588 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 01:48:43.401998   10588 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 01:48:43.402191   10588 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I1025 01:48:43.402191   10588 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I1025 01:48:43.402191   10588 kubeadm.go:317] 
	I1025 01:48:43.402895   10588 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I1025 01:48:43.402969   10588 kubeadm.go:317] 
	I1025 01:48:43.403045   10588 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I1025 01:48:43.403045   10588 kubeadm.go:317] 
	I1025 01:48:43.403045   10588 kubeadm.go:317]   mkdir -p $HOME/.kube
	I1025 01:48:43.403045   10588 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 01:48:43.403045   10588 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 01:48:43.403606   10588 kubeadm.go:317] 
	I1025 01:48:43.403668   10588 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I1025 01:48:43.403668   10588 kubeadm.go:317] 
	I1025 01:48:43.403846   10588 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 01:48:43.403846   10588 kubeadm.go:317] 
	I1025 01:48:43.403846   10588 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I1025 01:48:43.403846   10588 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 01:48:43.403846   10588 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 01:48:43.403846   10588 kubeadm.go:317] 
	I1025 01:48:43.403846   10588 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 01:48:43.404679   10588 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I1025 01:48:43.404679   10588 kubeadm.go:317] 
	I1025 01:48:43.404679   10588 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token kcm1je.niaeugxnay31jj1b \
	I1025 01:48:43.404679   10588 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:cfe7dd7a8e61587818260abb61477c9598aed0e51cc4d8006ee76bf98159c639 \
	I1025 01:48:43.404679   10588 kubeadm.go:317] 	--control-plane 
	I1025 01:48:43.404679   10588 kubeadm.go:317] 
	I1025 01:48:43.405671   10588 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I1025 01:48:43.405671   10588 kubeadm.go:317] 
	I1025 01:48:43.405671   10588 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token kcm1je.niaeugxnay31jj1b \
	I1025 01:48:43.405671   10588 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:cfe7dd7a8e61587818260abb61477c9598aed0e51cc4d8006ee76bf98159c639 
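The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo). A short sketch that recomputes it from a CA certificate file; the path assumes the certificateDir shown earlier (/var/lib/minikube/certs):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Usage: go run main.go /var/lib/minikube/certs/ca.crt
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found in certificate file")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm pins the SHA-256 of the CA's DER-encoded SubjectPublicKeyInfo.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(spki)
		fmt.Printf("--discovery-token-ca-cert-hash sha256:%s\n", hex.EncodeToString(sum[:]))
	}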
	I1025 01:48:43.405671   10588 cni.go:95] Creating CNI manager for "calico"
	I1025 01:48:43.412678   10588 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I1025 01:48:41.158482    4244 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 01:48:41.159498    4244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 01:48:41.166488    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:48:41.194495    4244 addons.go:153] Setting addon default-storageclass=true in "cilium-012958"
	W1025 01:48:41.194495    4244 addons.go:162] addon default-storageclass should already be in state true
	I1025 01:48:41.194495    4244 host.go:66] Checking if "cilium-012958" exists ...
	I1025 01:48:41.224500    4244 cli_runner.go:164] Run: docker container inspect cilium-012958 --format={{.State.Status}}
	I1025 01:48:41.438483    4244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 01:48:41.447480    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:48:41.452484    4244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50301 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-012958\id_rsa Username:docker}
	I1025 01:48:41.508502    4244 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 01:48:41.508502    4244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 01:48:41.526503    4244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-012958
	I1025 01:48:41.760497    4244 node_ready.go:35] waiting up to 5m0s for node "cilium-012958" to be "Ready" ...
	I1025 01:48:41.788494    4244 node_ready.go:49] node "cilium-012958" has status "Ready":"True"
	I1025 01:48:41.788494    4244 node_ready.go:38] duration metric: took 27.9963ms waiting for node "cilium-012958" to be "Ready" ...
	I1025 01:48:41.788494    4244 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 01:48:41.815814    4244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50301 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-012958\id_rsa Username:docker}
	I1025 01:48:41.889695    4244 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-656749584-pwt27" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:42.016736    4244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 01:48:42.313875    4244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 01:48:42.619481    4244 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.1809893s)
	I1025 01:48:42.619481    4244 start.go:826] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I1025 01:48:43.428677    4244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.411931s)
	I1025 01:48:43.428677    4244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.1147944s)
	I1025 01:48:43.431683    4244 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1025 01:48:43.435674    4244 addons.go:414] enableAddons completed in 2.6151575s
	I1025 01:48:44.035993    4244 pod_ready.go:102] pod "cilium-operator-656749584-pwt27" in "kube-system" namespace has status "Ready":"False"
	I1025 01:48:43.416682   10588 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1025 01:48:43.416682   10588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202045 bytes)
	I1025 01:48:43.700866   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
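Applying the Calico manifest above follows the same two-step pattern used for every addon in this log: the YAML is copied onto the node ("scp memory --> ..."), then kubectl apply is run against it with the in-node kubeconfig. A local, simplified sketch of that pattern (the paths and the namespace manifest are illustrative; minikube performs the copy over SSH into the node):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyManifest writes the manifest bytes to a file, then runs
	// `kubectl apply --kubeconfig=... -f <file>`, mirroring the log's
	// "scp memory --> path" followed by a kubectl apply.
	func applyManifest(manifest []byte, path, kubectl, kubeconfig string) error {
		if err := os.WriteFile(path, manifest, 0o644); err != nil {
			return err
		}
		out, err := exec.Command(kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", path).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
		return nil
	}

	func main() {
		// Inside the minikube node the equivalents would be /var/tmp/minikube/cni.yaml
		// and /var/lib/minikube/kubeconfig; here we use local throwaway paths.
		manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
		if err := applyManifest(manifest, "/tmp/demo.yaml", "kubectl", os.Getenv("HOME")+"/.kube/config"); err != nil {
			fmt.Println(err)
		}
	}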
	I1025 01:48:43.437664    8088 api_server.go:278] https://127.0.0.1:50398/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 01:48:43.437664    8088 api_server.go:102] status: https://127.0.0.1:50398/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 01:48:43.509465    8088 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:50398/healthz ...
	I1025 01:48:43.588082    8088 api_server.go:278] https://127.0.0.1:50398/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1025 01:48:43.588082    8088 api_server.go:102] status: https://127.0.0.1:50398/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1025 01:48:44.013476    8088 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:50398/healthz ...
	I1025 01:48:44.036916    8088 api_server.go:278] https://127.0.0.1:50398/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 01:48:44.036916    8088 api_server.go:102] status: https://127.0.0.1:50398/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 01:48:44.509699    8088 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:50398/healthz ...
	I1025 01:48:44.528677    8088 api_server.go:278] https://127.0.0.1:50398/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 01:48:44.528677    8088 api_server.go:102] status: https://127.0.0.1:50398/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 01:48:45.002777    8088 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:50398/healthz ...
	I1025 01:48:45.034229    8088 api_server.go:278] https://127.0.0.1:50398/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 01:48:45.034229    8088 api_server.go:102] status: https://127.0.0.1:50398/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 01:48:45.507812    8088 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:50398/healthz ...
	I1025 01:48:45.579944    8088 api_server.go:278] https://127.0.0.1:50398/healthz returned 200:
	ok
	I1025 01:48:45.604917    8088 api_server.go:140] control plane version: v1.25.3
	I1025 01:48:45.604917    8088 api_server.go:130] duration metric: took 11.124443s to wait for apiserver health ...
	I1025 01:48:45.604917    8088 cni.go:95] Creating CNI manager for ""
	I1025 01:48:45.604917    8088 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1025 01:48:45.604917    8088 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 01:48:45.636917    8088 system_pods.go:59] 8 kube-system pods found
	I1025 01:48:45.636917    8088 system_pods.go:61] "coredns-565d847f94-48b4v" [cfcce53d-c202-4d51-91e2-4504a0b7ab56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 01:48:45.636917    8088 system_pods.go:61] "etcd-newest-cni-014519" [2620db9c-d27d-4aaf-a06f-b2376f4cb8db] Running
	I1025 01:48:45.636917    8088 system_pods.go:61] "kube-apiserver-newest-cni-014519" [7b0d26ec-3d33-4c1f-af7a-f6a75fe6020d] Running
	I1025 01:48:45.636917    8088 system_pods.go:61] "kube-controller-manager-newest-cni-014519" [402c53bb-87b0-41b1-acad-7c78a38f3b7a] Running
	I1025 01:48:45.636917    8088 system_pods.go:61] "kube-proxy-f8b8r" [61f20ee0-d1f8-408e-aae2-125f4d7acfb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 01:48:45.636917    8088 system_pods.go:61] "kube-scheduler-newest-cni-014519" [f03a6adc-b04a-487a-a830-c3e40a0ca92a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 01:48:45.636917    8088 system_pods.go:61] "metrics-server-5c8fd5cf8-njrkj" [48404644-efee-441b-bc99-d99ffa5d6aa6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 01:48:45.636917    8088 system_pods.go:61] "storage-provisioner" [fbb47050-0e3d-4647-8a80-81a23bf0b1f7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 01:48:45.636917    8088 system_pods.go:74] duration metric: took 32ms to wait for pod list to return data ...
	I1025 01:48:45.636917    8088 node_conditions.go:102] verifying NodePressure condition ...
	I1025 01:48:45.750141    8088 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I1025 01:48:45.750141    8088 node_conditions.go:123] node cpu capacity is 16
	I1025 01:48:45.750141    8088 node_conditions.go:105] duration metric: took 113.2237ms to run NodePressure ...
	I1025 01:48:45.750141    8088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 01:48:47.807916    8088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.0577599s)
	I1025 01:48:47.807916    8088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 01:48:47.882106    8088 ops.go:34] apiserver oom_adj: -16
	I1025 01:48:47.882106    8088 kubeadm.go:631] restartCluster took 23.3199034s
	I1025 01:48:47.882106    8088 kubeadm.go:398] StartCluster complete in 23.439948s
	I1025 01:48:47.882106    8088 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:47.882106    8088 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 01:48:47.885085    8088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:47.940104    8088 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-014519" rescaled to 1
	I1025 01:48:47.940104    8088 start.go:212] Will wait 6m0s for node &{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 01:48:47.946087    8088 out.go:177] * Verifying Kubernetes components...
	I1025 01:48:47.940104    8088 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1025 01:48:47.940104    8088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 01:48:47.941095    8088 config.go:180] Loaded profile config "newest-cni-014519": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:48:47.946087    8088 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-014519"
	I1025 01:48:47.946087    8088 addons.go:65] Setting default-storageclass=true in profile "newest-cni-014519"
	I1025 01:48:47.946087    8088 addons.go:65] Setting metrics-server=true in profile "newest-cni-014519"
	I1025 01:48:47.949081    8088 addons.go:153] Setting addon metrics-server=true in "newest-cni-014519"
	I1025 01:48:47.949081    8088 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-014519"
	I1025 01:48:47.949081    8088 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-014519"
	I1025 01:48:47.946087    8088 addons.go:65] Setting dashboard=true in profile "newest-cni-014519"
	W1025 01:48:47.949081    8088 addons.go:162] addon storage-provisioner should already be in state true
	I1025 01:48:47.949081    8088 addons.go:153] Setting addon dashboard=true in "newest-cni-014519"
	W1025 01:48:47.949081    8088 addons.go:162] addon dashboard should already be in state true
	I1025 01:48:47.949081    8088 host.go:66] Checking if "newest-cni-014519" exists ...
	W1025 01:48:47.949081    8088 addons.go:162] addon metrics-server should already be in state true
	I1025 01:48:47.949081    8088 host.go:66] Checking if "newest-cni-014519" exists ...
	I1025 01:48:47.949081    8088 host.go:66] Checking if "newest-cni-014519" exists ...
	I1025 01:48:47.965091    8088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 01:48:47.973112    8088 cli_runner.go:164] Run: docker container inspect newest-cni-014519 --format={{.State.Status}}
	I1025 01:48:47.974097    8088 cli_runner.go:164] Run: docker container inspect newest-cni-014519 --format={{.State.Status}}
	I1025 01:48:47.980315    8088 cli_runner.go:164] Run: docker container inspect newest-cni-014519 --format={{.State.Status}}
	I1025 01:48:47.980315    8088 cli_runner.go:164] Run: docker container inspect newest-cni-014519 --format={{.State.Status}}
	I1025 01:48:48.283545    8088 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 01:48:48.290545    8088 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I1025 01:48:48.294549    8088 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 01:48:48.294549    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 01:48:48.313706    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:48.342677    8088 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1025 01:48:48.337684    8088 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 01:48:46.505992    4244 pod_ready.go:102] pod "cilium-operator-656749584-pwt27" in "kube-system" namespace has status "Ready":"False"
	I1025 01:48:48.527878    4244 pod_ready.go:102] pod "cilium-operator-656749584-pwt27" in "kube-system" namespace has status "Ready":"False"
	I1025 01:48:48.396393   10588 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (4.6954938s)
	I1025 01:48:48.396393   10588 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 01:48:48.422885   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.27.1 minikube.k8s.io/commit=e51468b57074bb26eb09785222979dd1e5fe9cd4 minikube.k8s.io/name=calico-012958 minikube.k8s.io/updated_at=2022_10_25T01_48_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:48.425903   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:48.439878   10588 ops.go:34] apiserver oom_adj: -16
	I1025 01:48:48.905924   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:49.659580   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:50.156985   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:50.660199   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:51.160664   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:48.345688    8088 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 01:48:48.346682    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 01:48:48.346682    8088 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 01:48:48.346682    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 01:48:48.356684    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:48.356684    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:48.483894    8088 addons.go:153] Setting addon default-storageclass=true in "newest-cni-014519"
	W1025 01:48:48.483894    8088 addons.go:162] addon default-storageclass should already be in state true
	I1025 01:48:48.483894    8088 host.go:66] Checking if "newest-cni-014519" exists ...
	I1025 01:48:48.520898    8088 cli_runner.go:164] Run: docker container inspect newest-cni-014519 --format={{.State.Status}}
	I1025 01:48:48.637185    8088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50399 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\newest-cni-014519\id_rsa Username:docker}
	I1025 01:48:48.669158    8088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50399 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\newest-cni-014519\id_rsa Username:docker}
	I1025 01:48:48.685200    8088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50399 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\newest-cni-014519\id_rsa Username:docker}
	I1025 01:48:48.781190    8088 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 01:48:48.781190    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 01:48:48.794165    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:48.801190    8088 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1025 01:48:48.815185    8088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-014519
	I1025 01:48:49.067752    8088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50399 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\newest-cni-014519\id_rsa Username:docker}
	I1025 01:48:49.097787    8088 api_server.go:51] waiting for apiserver process to appear ...
	I1025 01:48:49.118832    8088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:48:49.176773    8088 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 01:48:49.176773    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I1025 01:48:49.216784    8088 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 01:48:49.217792    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 01:48:49.227767    8088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 01:48:49.311183    8088 api_server.go:71] duration metric: took 1.3710688s to wait for apiserver process to appear ...
	I1025 01:48:49.311183    8088 api_server.go:87] waiting for apiserver healthz status ...
	I1025 01:48:49.311183    8088 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:50398/healthz ...
	I1025 01:48:49.311183    8088 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 01:48:49.311183    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 01:48:49.389431    8088 api_server.go:278] https://127.0.0.1:50398/healthz returned 200:
	ok
	I1025 01:48:49.393467    8088 api_server.go:140] control plane version: v1.25.3
	I1025 01:48:49.393467    8088 api_server.go:130] duration metric: took 82.2838ms to wait for apiserver health ...
	I1025 01:48:49.393467    8088 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 01:48:49.413392    8088 system_pods.go:59] 8 kube-system pods found
	I1025 01:48:49.413392    8088 system_pods.go:61] "coredns-565d847f94-48b4v" [cfcce53d-c202-4d51-91e2-4504a0b7ab56] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 01:48:49.413392    8088 system_pods.go:61] "etcd-newest-cni-014519" [2620db9c-d27d-4aaf-a06f-b2376f4cb8db] Running
	I1025 01:48:49.413392    8088 system_pods.go:61] "kube-apiserver-newest-cni-014519" [7b0d26ec-3d33-4c1f-af7a-f6a75fe6020d] Running
	I1025 01:48:49.413392    8088 system_pods.go:61] "kube-controller-manager-newest-cni-014519" [402c53bb-87b0-41b1-acad-7c78a38f3b7a] Running
	I1025 01:48:49.413392    8088 system_pods.go:61] "kube-proxy-f8b8r" [61f20ee0-d1f8-408e-aae2-125f4d7acfb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 01:48:49.413392    8088 system_pods.go:61] "kube-scheduler-newest-cni-014519" [f03a6adc-b04a-487a-a830-c3e40a0ca92a] Running
	I1025 01:48:49.413392    8088 system_pods.go:61] "metrics-server-5c8fd5cf8-njrkj" [48404644-efee-441b-bc99-d99ffa5d6aa6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 01:48:49.413392    8088 system_pods.go:61] "storage-provisioner" [fbb47050-0e3d-4647-8a80-81a23bf0b1f7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 01:48:49.413392    8088 system_pods.go:74] duration metric: took 19.9245ms to wait for pod list to return data ...
	I1025 01:48:49.413392    8088 default_sa.go:34] waiting for default service account to be created ...
	I1025 01:48:49.481888    8088 default_sa.go:45] found service account: "default"
	I1025 01:48:49.482051    8088 default_sa.go:55] duration metric: took 68.6585ms for default service account to be created ...
	I1025 01:48:49.482051    8088 kubeadm.go:573] duration metric: took 1.5419356s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1025 01:48:49.482051    8088 node_conditions.go:102] verifying NodePressure condition ...
	I1025 01:48:49.482513    8088 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 01:48:49.482605    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 01:48:49.496901    8088 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I1025 01:48:49.496901    8088 node_conditions.go:123] node cpu capacity is 16
	I1025 01:48:49.496901    8088 node_conditions.go:105] duration metric: took 14.8504ms to run NodePressure ...
	I1025 01:48:49.496901    8088 start.go:217] waiting for startup goroutines ...
	I1025 01:48:49.524887    8088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 01:48:49.578630    8088 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 01:48:49.579075    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 01:48:49.777464    8088 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 01:48:49.777464    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 01:48:49.901323    8088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 01:48:50.077712    8088 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 01:48:50.077712    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I1025 01:48:50.476824    8088 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 01:48:50.476824    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 01:48:50.685675    8088 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 01:48:50.685675    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 01:48:50.879791    8088 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 01:48:50.879791    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 01:48:51.005829    8088 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 01:48:51.005829    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 01:48:51.201947    8088 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 01:48:51.201947    8088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 01:48:51.322937    8088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 01:48:54.089453    8088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.5645338s)
	I1025 01:48:54.090611    8088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.8628104s)
	I1025 01:48:54.421587    8088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.5202316s)
	I1025 01:48:54.421587    8088 addons.go:383] Verifying addon metrics-server=true in "newest-cni-014519"
	I1025 01:48:55.180629    8088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.8576648s)
	I1025 01:48:55.186631    8088 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1025 01:48:55.189625    8088 addons.go:414] enableAddons completed in 7.2494695s
	I1025 01:48:55.214631    8088 ssh_runner.go:195] Run: rm -f paused
	I1025 01:48:55.478070    8088 start.go:506] kubectl: 1.18.2, cluster: 1.25.3 (minor skew: 7)
	I1025 01:48:55.482066    8088 out.go:177] 
	W1025 01:48:55.488065    8088 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.25.3.
	I1025 01:48:55.494068    8088 out.go:177]   - Want kubectl v1.25.3? Try 'minikube kubectl -- get pods -A'
	I1025 01:48:55.506059    8088 out.go:177] * Done! kubectl is now configured to use "newest-cni-014519" cluster and "default" namespace by default
	I1025 01:48:51.015241    4244 pod_ready.go:102] pod "cilium-operator-656749584-pwt27" in "kube-system" namespace has status "Ready":"False"
	I1025 01:48:53.579426    4244 pod_ready.go:102] pod "cilium-operator-656749584-pwt27" in "kube-system" namespace has status "Ready":"False"
	I1025 01:48:51.662596   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:52.167824   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:52.660408   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:53.170199   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:53.658914   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:54.163017   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:54.665611   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:55.156608   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:56.183023   10588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:48:57.088317   10588 kubeadm.go:1067] duration metric: took 8.6918636s to wait for elevateKubeSystemPrivileges.
	I1025 01:48:57.088317   10588 kubeadm.go:398] StartCluster complete in 41.1332869s
	I1025 01:48:57.088317   10588 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:57.088317   10588 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 01:48:57.092317   10588 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:48:57.924190   10588 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-012958" rescaled to 1
	I1025 01:48:57.924190   10588 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 01:48:57.927180   10588 out.go:177] * Verifying Kubernetes components...
	I1025 01:48:57.924190   10588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 01:48:57.924190   10588 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I1025 01:48:57.925186   10588 config.go:180] Loaded profile config "calico-012958": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:48:57.931211   10588 addons.go:65] Setting storage-provisioner=true in profile "calico-012958"
	I1025 01:48:57.931211   10588 addons.go:65] Setting default-storageclass=true in profile "calico-012958"
	I1025 01:48:57.931211   10588 addons.go:153] Setting addon storage-provisioner=true in "calico-012958"
	W1025 01:48:57.931211   10588 addons.go:162] addon storage-provisioner should already be in state true
	I1025 01:48:57.931211   10588 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-012958"
	I1025 01:48:57.931211   10588 host.go:66] Checking if "calico-012958" exists ...
	I1025 01:48:57.950182   10588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 01:48:57.960186   10588 cli_runner.go:164] Run: docker container inspect calico-012958 --format={{.State.Status}}
	I1025 01:48:57.962184   10588 cli_runner.go:164] Run: docker container inspect calico-012958 --format={{.State.Status}}
	I1025 01:48:58.061193   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:58.323263   10588 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 01:48:55.980050    4244 pod_ready.go:102] pod "cilium-operator-656749584-pwt27" in "kube-system" namespace has status "Ready":"False"
	I1025 01:48:56.103979    4244 pod_ready.go:92] pod "cilium-operator-656749584-pwt27" in "kube-system" namespace has status "Ready":"True"
	I1025 01:48:56.103979    4244 pod_ready.go:81] duration metric: took 14.2141846s waiting for pod "cilium-operator-656749584-pwt27" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:56.103979    4244 pod_ready.go:78] waiting up to 5m0s for pod "cilium-wr8k8" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:58.286265    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:00.481139    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:48:58.326264   10588 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 01:48:58.326264   10588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 01:48:58.343302   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:58.385307   10588 addons.go:153] Setting addon default-storageclass=true in "calico-012958"
	W1025 01:48:58.385307   10588 addons.go:162] addon default-storageclass should already be in state true
	I1025 01:48:58.385307   10588 host.go:66] Checking if "calico-012958" exists ...
	I1025 01:48:58.419267   10588 node_ready.go:35] waiting up to 5m0s for node "calico-012958" to be "Ready" ...
	I1025 01:48:58.419267   10588 cli_runner.go:164] Run: docker container inspect calico-012958 --format={{.State.Status}}
	I1025 01:48:58.583809   10588 node_ready.go:49] node "calico-012958" has status "Ready":"True"
	I1025 01:48:58.583809   10588 node_ready.go:38] duration metric: took 164.5403ms waiting for node "calico-012958" to be "Ready" ...
	I1025 01:48:58.583809   10588 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 01:48:58.618783   10588 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace to be "Ready" ...
	I1025 01:48:58.627802   10588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 01:48:58.664788   10588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50354 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa Username:docker}
	I1025 01:48:58.711805   10588 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 01:48:58.711805   10588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 01:48:58.726830   10588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-012958
	I1025 01:48:59.012995   10588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50354 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-012958\id_rsa Username:docker}
	I1025 01:48:59.510415   10588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 01:48:59.726048   10588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 01:49:00.792448   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:02.925248    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:05.424138    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:03.298250   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:05.300129   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:06.198451   10588 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.5705957s)
	I1025 01:49:06.198451   10588 start.go:826] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
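For context, the sed pipeline that just completed rewrites the CoreDNS ConfigMap so that a hosts block is inserted ahead of the existing forward directive in the Corefile; the 192.168.65.2 address is the host gateway recorded in the log line above. A minimal sketch of the resulting Corefile fragment, assuming nothing beyond what the sed expression itself inserts:

        hosts {
           192.168.65.2 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

With this in place, pods resolve host.minikube.internal to the host, while all other names continue to fall through to /etc/resolv.conf.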
	I1025 01:49:06.688443   10588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.1779777s)
	I1025 01:49:06.689457   10588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.9633595s)
	I1025 01:49:06.691432   10588 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-10-25 01:48:11 UTC, end at Tue 2022-10-25 01:49:10 UTC. --
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.436598700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.475331500Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.497963600Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.498077000Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.498096000Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.498104300Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.498112200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.498119800Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.498503500Z" level=info msg="Loading containers: start."
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.913566000Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 25 01:48:21 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:21.075557600Z" level=info msg="Loading containers: done."
	Oct 25 01:48:21 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:21.142358100Z" level=info msg="Docker daemon" commit=e42327a graphdriver(s)=overlay2 version=20.10.18
	Oct 25 01:48:21 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:21.142543200Z" level=info msg="Daemon has completed initialization"
	Oct 25 01:48:21 newest-cni-014519 systemd[1]: Started Docker Application Container Engine.
	Oct 25 01:48:21 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:21.219014700Z" level=info msg="API listen on [::]:2376"
	Oct 25 01:48:21 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:21.224018100Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 25 01:48:48 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:48.492187400Z" level=info msg="ignoring event" container=8b968f932b5af74087ecefefa9d5b8d1bed29f99482af778b38253980a744b03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:48:48 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:48.701741000Z" level=info msg="ignoring event" container=f47949cb36488bcc6969975a4d7637dccb3b13cb8f7710cc833055e45556a128 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:48:53 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:53.183067900Z" level=info msg="ignoring event" container=b61d1bc71a77ba936275e082dad48376380d31002af337e66d3a64820ad592c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:48:55 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:55.902999100Z" level=info msg="ignoring event" container=a5bc0978ad5b1133ec2137cc9f823009eaf73c08a17bd5ef431b32cf8b7df748 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:48:57 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:57.283320500Z" level=info msg="ignoring event" container=89701d2b20a9f1b4be58e846c61a205b98bdb84ef58fb0c7bf9591720606fd59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:49:00 newest-cni-014519 dockerd[641]: time="2022-10-25T01:49:00.359069700Z" level=info msg="ignoring event" container=56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:49:02 newest-cni-014519 dockerd[641]: time="2022-10-25T01:49:02.821029400Z" level=info msg="ignoring event" container=2b5aa6cd6bf285814968ffe5ad9b5aeabfc8efb36c7c2d3d0782526b08f9615a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:49:03 newest-cni-014519 dockerd[641]: time="2022-10-25T01:49:03.315809600Z" level=info msg="ignoring event" container=d7d47d6175b20dd0398269055aba4f03a47477527f8d5df6b5885dd1e11f02e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:49:03 newest-cni-014519 dockerd[641]: time="2022-10-25T01:49:03.502676200Z" level=info msg="ignoring event" container=3ee97de7a1ce626673af734ff712924242e2d515fabd82803f3ca71b42ee152a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	a2108138e3005       6e38f40d628db       23 seconds ago       Running             storage-provisioner       1                   375e905850293
	47738d6c8227c       beaaf00edd38a       23 seconds ago       Running             kube-proxy                1                   c22cc3b5e7fc3
	6fdfd9f56268e       a8a176a5d5d69       37 seconds ago       Running             etcd                      1                   ad84ef6daead6
	d4b7a7ce03a2a       6039992312758       37 seconds ago       Running             kube-controller-manager   2                   eaae751434a89
	40651d26ca2af       0346dbd74bcb9       37 seconds ago       Running             kube-apiserver            1                   d5d01d92e7ed9
	d728c043f9f7c       6d23ec0e8b87e       37 seconds ago       Running             kube-scheduler            1                   4f0fa2a45d194
	8c32d977d6a4b       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   76af3fc0ba53f
	e4edba2e75564       beaaf00edd38a       About a minute ago   Exited              kube-proxy                0                   be757359bd1ef
	97497f2af52ae       6039992312758       About a minute ago   Exited              kube-controller-manager   1                   4174f19e3463a
	7ff17fe113907       0346dbd74bcb9       2 minutes ago        Exited              kube-apiserver            0                   6199efa3639ae
	669ed1999ddd8       a8a176a5d5d69       2 minutes ago        Exited              etcd                      0                   dc246491ed5a3
	d543071ca09ff       6d23ec0e8b87e       2 minutes ago        Exited              kube-scheduler            0                   08c2e2824d68e
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Oct25 01:24] WSL2: Performing memory compaction.
	[Oct25 01:25] WSL2: Performing memory compaction.
	[Oct25 01:26] process 'docker/tmp/qemu-check146077527/check' started with executable stack
	[Oct25 01:28] WSL2: Performing memory compaction.
	[Oct25 01:29] WSL2: Performing memory compaction.
	[Oct25 01:30] WSL2: Performing memory compaction.
	[Oct25 01:31] WSL2: Performing memory compaction.
	[Oct25 01:32] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.169345] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000022] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000876] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct25 01:33] WSL2: Performing memory compaction.
	[Oct25 01:35] WSL2: Performing memory compaction.
	[Oct25 01:37] WSL2: Performing memory compaction.
	[Oct25 01:46] WSL2: Performing memory compaction.
	[Oct25 01:47] WSL2: Performing memory compaction.
	[Oct25 01:49] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [669ed1999ddd] <==
	* {"level":"warn","ts":"2022-10-25T01:47:55.178Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:47:54.475Z","time spent":"703.1804ms","remote":"127.0.0.1:43110","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":194,"request content":"key:\"/registry/serviceaccounts/default/\" range_end:\"/registry/serviceaccounts/default0\" "}
	{"level":"info","ts":"2022-10-25T01:47:55.178Z","caller":"traceutil/trace.go:171","msg":"trace[1298668405] range","detail":"{range_begin:/registry/roles/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:367; }","duration":"698.0532ms","start":"2022-10-25T01:47:54.480Z","end":"2022-10-25T01:47:55.178Z","steps":["trace[1298668405] 'agreement among raft nodes before linearized reading'  (duration: 202.1362ms)","trace[1298668405] 'range keys from in-memory index tree'  (duration: 495.8354ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:47:55.178Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:47:54.699Z","time spent":"478.8742ms","remote":"127.0.0.1:43102","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":364,"request content":"key:\"/registry/namespaces/default\" "}
	{"level":"warn","ts":"2022-10-25T01:47:55.178Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:47:54.480Z","time spent":"698.1804ms","remote":"127.0.0.1:43156","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":0,"response size":28,"request content":"key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" "}
	{"level":"warn","ts":"2022-10-25T01:47:55.463Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"154.503ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13557105968896846591 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:3c24840cd26caafe>","response":"size:40"}
	{"level":"info","ts":"2022-10-25T01:47:55.463Z","caller":"traceutil/trace.go:171","msg":"trace[1956782486] linearizableReadLoop","detail":"{readStateIndex:385; appliedIndex:384; }","duration":"262.6763ms","start":"2022-10-25T01:47:55.201Z","end":"2022-10-25T01:47:55.463Z","steps":["trace[1956782486] 'read index received'  (duration: 107.8409ms)","trace[1956782486] 'applied index is now lower than readState.Index'  (duration: 154.8309ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:47:55.463Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"262.8214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-10-25T01:47:55.464Z","caller":"traceutil/trace.go:171","msg":"trace[1094349952] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:368; }","duration":"262.9513ms","start":"2022-10-25T01:47:55.201Z","end":"2022-10-25T01:47:55.464Z","steps":["trace[1094349952] 'agreement among raft nodes before linearized reading'  (duration: 262.7866ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:47:55.464Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"184.1981ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4574"}
	{"level":"info","ts":"2022-10-25T01:47:55.464Z","caller":"traceutil/trace.go:171","msg":"trace[1372077406] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:368; }","duration":"184.4424ms","start":"2022-10-25T01:47:55.279Z","end":"2022-10-25T01:47:55.464Z","steps":["trace[1372077406] 'agreement among raft nodes before linearized reading'  (duration: 184.1356ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:47:59.710Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"114.5677ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-5c8fd5cf8-njrkj\" ","response":"range_response_count:1 size:2935"}
	{"level":"warn","ts":"2022-10-25T01:47:59.710Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.7825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:5133"}
	{"level":"warn","ts":"2022-10-25T01:47:59.710Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.9778ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2022-10-25T01:47:59.710Z","caller":"traceutil/trace.go:171","msg":"trace[1054345671] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:407; }","duration":"107.8676ms","start":"2022-10-25T01:47:59.602Z","end":"2022-10-25T01:47:59.710Z","steps":["trace[1054345671] 'agreement among raft nodes before linearized reading'  (duration: 94.8578ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T01:47:59.710Z","caller":"traceutil/trace.go:171","msg":"trace[1238392215] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:407; }","duration":"108.0416ms","start":"2022-10-25T01:47:59.602Z","end":"2022-10-25T01:47:59.710Z","steps":["trace[1238392215] 'agreement among raft nodes before linearized reading'  (duration: 95.0134ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T01:47:59.710Z","caller":"traceutil/trace.go:171","msg":"trace[662358375] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-5c8fd5cf8-njrkj; range_end:; response_count:1; response_revision:407; }","duration":"114.8686ms","start":"2022-10-25T01:47:59.595Z","end":"2022-10-25T01:47:59.710Z","steps":["trace[662358375] 'agreement among raft nodes before linearized reading'  (duration: 101.9208ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T01:48:02.574Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-10-25T01:48:02.574Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-014519","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.17.0.2:2380"],"advertise-client-urls":["https://172.17.0.2:2379"]}
	WARNING: 2022/10/25 01:48:02 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/10/25 01:48:02 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2022-10-25T01:48:02.693Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b8e14bda2255bc24","current-leader-member-id":"b8e14bda2255bc24"}
	WARNING: 2022/10/25 01:48:02 [core] grpc: addrConn.createTransport failed to connect to {172.17.0.2:2379 172.17.0.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 172.17.0.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-10-25T01:48:02.774Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"172.17.0.2:2380"}
	{"level":"info","ts":"2022-10-25T01:48:02.775Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"172.17.0.2:2380"}
	{"level":"info","ts":"2022-10-25T01:48:02.775Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-014519","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.17.0.2:2380"],"advertise-client-urls":["https://172.17.0.2:2379"]}
	
	* 
	* ==> etcd [6fdfd9f56268] <==
	* {"level":"info","ts":"2022-10-25T01:48:47.595Z","caller":"traceutil/trace.go:171","msg":"trace[513039722] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/statefulset-controller; range_end:; response_count:1; response_revision:465; }","duration":"110.4102ms","start":"2022-10-25T01:48:47.485Z","end":"2022-10-25T01:48:47.595Z","steps":["trace[513039722] 'agreement among raft nodes before linearized reading'  (duration: 107.3188ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:47.924Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.9366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2022-10-25T01:48:47.925Z","caller":"traceutil/trace.go:171","msg":"trace[528855984] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:468; }","duration":"102.5224ms","start":"2022-10-25T01:48:47.822Z","end":"2022-10-25T01:48:47.925Z","steps":["trace[528855984] 'agreement among raft nodes before linearized reading'  (duration: 52.1607ms)","trace[528855984] 'range keys from in-memory index tree'  (duration: 49.7148ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:48:49.987Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.5805ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-f8b8r\" ","response":"range_response_count:1 size:4709"}
	{"level":"info","ts":"2022-10-25T01:48:49.987Z","caller":"traceutil/trace.go:171","msg":"trace[152747243] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-f8b8r; range_end:; response_count:1; response_revision:479; }","duration":"107.7198ms","start":"2022-10-25T01:48:49.879Z","end":"2022-10-25T01:48:49.987Z","steps":["trace[152747243] 'agreement among raft nodes before linearized reading'  (duration: 18.7386ms)","trace[152747243] 'range keys from in-memory index tree'  (duration: 88.6972ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:48:50.542Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"146.2591ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-10-25T01:48:50.542Z","caller":"traceutil/trace.go:171","msg":"trace[1226052781] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:483; }","duration":"146.4952ms","start":"2022-10-25T01:48:50.396Z","end":"2022-10-25T01:48:50.542Z","steps":["trace[1226052781] 'range keys from in-memory index tree'  (duration: 143.8739ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:50.543Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"149.6089ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-newest-cni-014519\" ","response":"range_response_count:1 size:7267"}
	{"level":"info","ts":"2022-10-25T01:48:50.543Z","caller":"traceutil/trace.go:171","msg":"trace[139631497] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-newest-cni-014519; range_end:; response_count:1; response_revision:483; }","duration":"149.6479ms","start":"2022-10-25T01:48:50.393Z","end":"2022-10-25T01:48:50.543Z","steps":["trace[139631497] 'range keys from in-memory index tree'  (duration: 144.5938ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:50.726Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"126.0267ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-newest-cni-014519\" ","response":"range_response_count:1 size:7573"}
	{"level":"info","ts":"2022-10-25T01:48:50.727Z","caller":"traceutil/trace.go:171","msg":"trace[2094490250] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-newest-cni-014519; range_end:; response_count:1; response_revision:485; }","duration":"126.1965ms","start":"2022-10-25T01:48:50.600Z","end":"2022-10-25T01:48:50.727Z","steps":["trace[2094490250] 'agreement among raft nodes before linearized reading'  (duration: 73.6448ms)","trace[2094490250] 'range keys from in-memory index tree'  (duration: 52.3471ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:48:59.329Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.2376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-10-25T01:48:59.330Z","caller":"traceutil/trace.go:171","msg":"trace[917184062] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"110.5082ms","start":"2022-10-25T01:48:59.219Z","end":"2022-10-25T01:48:59.330Z","steps":["trace[917184062] 'agreement among raft nodes before linearized reading'  (duration: 110.1891ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:59.513Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"114.6357ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-014519\" ","response":"range_response_count:1 size:4574"}
	{"level":"info","ts":"2022-10-25T01:48:59.513Z","caller":"traceutil/trace.go:171","msg":"trace[1314684674] range","detail":"{range_begin:/registry/minions/newest-cni-014519; range_end:; response_count:1; response_revision:553; }","duration":"114.757ms","start":"2022-10-25T01:48:59.398Z","end":"2022-10-25T01:48:59.513Z","steps":["trace[1314684674] 'agreement among raft nodes before linearized reading'  (duration: 114.5126ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:59.513Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"115.1933ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-014519\" ","response":"range_response_count:1 size:4574"}
	{"level":"info","ts":"2022-10-25T01:48:59.514Z","caller":"traceutil/trace.go:171","msg":"trace[672257262] range","detail":"{range_begin:/registry/minions/newest-cni-014519; range_end:; response_count:1; response_revision:553; }","duration":"115.345ms","start":"2022-10-25T01:48:59.398Z","end":"2022-10-25T01:48:59.514Z","steps":["trace[672257262] 'agreement among raft nodes before linearized reading'  (duration: 115.1413ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:59.514Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"115.592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-014519\" ","response":"range_response_count:1 size:4574"}
	{"level":"info","ts":"2022-10-25T01:48:59.514Z","caller":"traceutil/trace.go:171","msg":"trace[1301735318] range","detail":"{range_begin:/registry/minions/newest-cni-014519; range_end:; response_count:1; response_revision:553; }","duration":"115.6417ms","start":"2022-10-25T01:48:59.398Z","end":"2022-10-25T01:48:59.514Z","steps":["trace[1301735318] 'agreement among raft nodes before linearized reading'  (duration: 115.5513ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:59.515Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"116.3715ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-014519\" ","response":"range_response_count:1 size:4574"}
	{"level":"info","ts":"2022-10-25T01:48:59.515Z","caller":"traceutil/trace.go:171","msg":"trace[2093128474] range","detail":"{range_begin:/registry/minions/newest-cni-014519; range_end:; response_count:1; response_revision:553; }","duration":"116.5826ms","start":"2022-10-25T01:48:59.398Z","end":"2022-10-25T01:48:59.515Z","steps":["trace[2093128474] 'agreement among raft nodes before linearized reading'  (duration: 116.2474ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:59.811Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.5322ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-57bbdc5f89\" ","response":"range_response_count:1 size:3143"}
	{"level":"info","ts":"2022-10-25T01:48:59.811Z","caller":"traceutil/trace.go:171","msg":"trace[1598526374] range","detail":"{range_begin:/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-57bbdc5f89; range_end:; response_count:1; response_revision:559; }","duration":"110.6691ms","start":"2022-10-25T01:48:59.701Z","end":"2022-10-25T01:48:59.811Z","steps":["trace[1598526374] 'agreement among raft nodes before linearized reading'  (duration: 110.4538ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:59.811Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"129.1391ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:1 size:957"}
	{"level":"info","ts":"2022-10-25T01:48:59.812Z","caller":"traceutil/trace.go:171","msg":"trace[1074079936] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:559; }","duration":"129.4308ms","start":"2022-10-25T01:48:59.682Z","end":"2022-10-25T01:48:59.812Z","steps":["trace[1074079936] 'agreement among raft nodes before linearized reading'  (duration: 129.0877ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  01:49:21 up  1:55,  0 users,  load average: 14.79, 10.79, 7.92
	Linux newest-cni-014519 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [40651d26ca2a] <==
	* I1025 01:48:44.011708       1 trace.go:205] Trace[606057151]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:0a5f1a4d-6897-4004-a583-9ec790693925,client:172.17.0.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (25-Oct-2022 01:48:43.485) (total time: 525ms):
	Trace[606057151]: ---"Write to database call finished" len:234,err:<nil> 525ms (01:48:44.011)
	Trace[606057151]: [525.8447ms] [525.8447ms] END
	I1025 01:48:44.034060       1 trace.go:205] Trace[2146592351]: "Create" url:/api/v1/nodes,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:386cad93-3278-4308-888f-7a260461a76a,client:172.17.0.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (25-Oct-2022 01:48:43.492) (total time: 541ms):
	Trace[2146592351]: ---"Write to database call finished" len:2567,err:nodes "newest-cni-014519" already exists 541ms (01:48:44.033)
	Trace[2146592351]: [541.9736ms] [541.9736ms] END
	I1025 01:48:44.451136       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1025 01:48:44.792494       1 handler_proxy.go:105] no RequestInfo found in the context
	E1025 01:48:44.792644       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1025 01:48:44.792664       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 01:48:44.792686       1 handler_proxy.go:105] no RequestInfo found in the context
	E1025 01:48:44.792808       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1025 01:48:44.793868       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1025 01:48:46.401647       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1025 01:48:46.512540       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I1025 01:48:47.019522       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I1025 01:48:47.482203       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 01:48:47.676380       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 01:48:54.081307       1 controller.go:616] quota admission added evaluator for: namespaces
	I1025 01:48:55.017969       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.110.254.55]
	I1025 01:48:55.112707       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.96.194.145]
	I1025 01:48:58.992865       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I1025 01:48:59.097235       1 controller.go:616] quota admission added evaluator for: endpoints
	I1025 01:48:59.180954       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [7ff17fe11390] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 01:48:03.587988       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 01:48:03.588314       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 01:48:03.588321       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [97497f2af52a] <==
	* I1025 01:47:49.274522       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 01:47:49.274539       1 shared_informer.go:262] Caches are synced for node
	I1025 01:47:49.274586       1 range_allocator.go:166] Starting range CIDR allocator
	I1025 01:47:49.274597       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1025 01:47:49.274603       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I1025 01:47:49.274617       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1025 01:47:49.274778       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	W1025 01:47:49.274798       1 node_lifecycle_controller.go:1058] Missing timestamp for Node newest-cni-014519. Assuming now as a timestamp.
	I1025 01:47:49.274835       1 taint_manager.go:209] "Sending events to api server"
	I1025 01:47:49.274859       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1025 01:47:49.275089       1 event.go:294] "Event occurred" object="newest-cni-014519" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-014519 event: Registered Node newest-cni-014519 in Controller"
	I1025 01:47:49.275147       1 shared_informer.go:262] Caches are synced for persistent volume
	I1025 01:47:49.275162       1 shared_informer.go:262] Caches are synced for attach detach
	I1025 01:47:49.279657       1 shared_informer.go:262] Caches are synced for TTL
	I1025 01:47:49.302808       1 range_allocator.go:367] Set node newest-cni-014519 PodCIDR to [192.168.0.0/24]
	I1025 01:47:49.595933       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 01:47:49.680251       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 01:47:49.680481       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1025 01:47:49.778250       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-md6qs"
	I1025 01:47:49.800938       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-48b4v"
	I1025 01:47:49.977830       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-f8b8r"
	I1025 01:47:50.257093       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I1025 01:47:50.284585       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-md6qs"
	I1025 01:47:59.476658       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c8fd5cf8 to 1"
	I1025 01:47:59.512661       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c8fd5cf8-njrkj"
	
	* 
	* ==> kube-controller-manager [d4b7a7ce03a2] <==
	* I1025 01:48:58.844422       1 shared_informer.go:262] Caches are synced for PV protection
	I1025 01:48:58.844503       1 shared_informer.go:262] Caches are synced for disruption
	I1025 01:48:58.844542       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1025 01:48:58.844756       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1025 01:48:58.844828       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1025 01:48:58.847299       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1025 01:48:58.847521       1 shared_informer.go:262] Caches are synced for ephemeral
	I1025 01:48:58.874648       1 shared_informer.go:262] Caches are synced for stateful set
	I1025 01:48:58.876372       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	E1025 01:48:58.898070       1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1025 01:48:58.898735       1 shared_informer.go:262] Caches are synced for attach detach
	I1025 01:48:58.903708       1 shared_informer.go:262] Caches are synced for resource quota
	E1025 01:48:58.907218       1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1025 01:48:58.913386       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I1025 01:48:58.975433       1 shared_informer.go:262] Caches are synced for endpoint
	I1025 01:48:58.981713       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 01:48:58.992710       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1025 01:48:58.994908       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1025 01:48:59.020359       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-7b94984548 to 1"
	I1025 01:48:59.079019       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-57bbdc5f89 to 1"
	I1025 01:48:59.304959       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-57bbdc5f89" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-57bbdc5f89-x7jd6"
	I1025 01:48:59.314150       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 01:48:59.329038       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 01:48:59.329079       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1025 01:48:59.383316       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7b94984548-pxbgw"
	
	* 
	* ==> kube-proxy [47738d6c8227] <==
	* I1025 01:48:49.101952       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1025 01:48:49.108200       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1025 01:48:49.111912       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1025 01:48:49.179016       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1025 01:48:49.196472       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I1025 01:48:49.218109       1 node.go:163] Successfully retrieved node IP: 172.17.0.2
	I1025 01:48:49.218281       1 server_others.go:138] "Detected node IP" address="172.17.0.2"
	I1025 01:48:49.218323       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1025 01:48:49.575598       1 server_others.go:206] "Using iptables Proxier"
	I1025 01:48:49.575658       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1025 01:48:49.575679       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1025 01:48:49.575787       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1025 01:48:49.575874       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 01:48:49.576518       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 01:48:49.577336       1 server.go:661] "Version info" version="v1.25.3"
	I1025 01:48:49.578097       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 01:48:49.579232       1 config.go:444] "Starting node config controller"
	I1025 01:48:49.579531       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1025 01:48:49.581453       1 config.go:226] "Starting endpoint slice config controller"
	I1025 01:48:49.581604       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1025 01:48:49.592047       1 config.go:317] "Starting service config controller"
	I1025 01:48:49.592080       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1025 01:48:49.691660       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1025 01:48:49.699487       1 shared_informer.go:262] Caches are synced for service config
	I1025 01:48:49.780438       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [e4edba2e7556] <==
	* I1025 01:47:58.894397       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1025 01:47:58.897668       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1025 01:47:58.901134       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1025 01:47:58.973626       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1025 01:47:58.977374       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I1025 01:47:59.007438       1 node.go:163] Successfully retrieved node IP: 172.17.0.2
	I1025 01:47:59.007649       1 server_others.go:138] "Detected node IP" address="172.17.0.2"
	I1025 01:47:59.008005       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1025 01:47:59.182428       1 server_others.go:206] "Using iptables Proxier"
	I1025 01:47:59.182594       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1025 01:47:59.182617       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1025 01:47:59.182644       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1025 01:47:59.182681       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 01:47:59.183241       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 01:47:59.183928       1 server.go:661] "Version info" version="v1.25.3"
	I1025 01:47:59.183973       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 01:47:59.184916       1 config.go:317] "Starting service config controller"
	I1025 01:47:59.184968       1 config.go:226] "Starting endpoint slice config controller"
	I1025 01:47:59.184990       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1025 01:47:59.184997       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1025 01:47:59.185206       1 config.go:444] "Starting node config controller"
	I1025 01:47:59.185226       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1025 01:47:59.286329       1 shared_informer.go:262] Caches are synced for service config
	I1025 01:47:59.286808       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1025 01:47:59.287319       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d543071ca09f] <==
	* W1025 01:47:24.322852       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 01:47:24.323004       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1025 01:47:24.616209       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 01:47:24.616347       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1025 01:47:24.716471       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 01:47:24.716641       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1025 01:47:24.964104       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 01:47:24.964225       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1025 01:47:25.898571       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 01:47:25.898697       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 01:47:26.430999       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 01:47:26.431181       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 01:47:30.275990       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 01:47:30.276120       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 01:47:32.076408       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 01:47:32.076536       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1025 01:47:32.389222       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 01:47:32.389352       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 01:47:34.180577       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 01:47:34.180723       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1025 01:47:51.891214       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 01:48:02.375282       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E1025 01:48:02.375457       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E1025 01:48:02.375475       1 run.go:74] "command failed" err="finished without leader elect"
	I1025 01:48:02.375529       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kube-scheduler [d728c043f9f7] <==
	* W1025 01:48:34.387403       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I1025 01:48:36.720428       1 serving.go:348] Generated self-signed cert in-memory
	W1025 01:48:43.592533       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 01:48:43.592580       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 01:48:43.592601       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 01:48:43.592618       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 01:48:43.696239       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1025 01:48:43.696400       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 01:48:43.700970       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 01:48:43.701257       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 01:48:43.701279       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 01:48:43.702533       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 01:48:43.802465       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-10-25 01:48:11 UTC, end at Tue 2022-10-25 01:49:22 UTC. --
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         rpc error: code = Unknown desc = [failed to set up sandbox container "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" network for pod "coredns-565d847f94-48b4v": networkPlugin cni failed to set up pod "coredns-565d847f94-48b4v_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" network for pod "coredns-565d847f94-48b4v": networkPlugin cni failed to teardown pod "coredns-565d847f94-48b4v_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.10 -j CNI-013276935d946a4db99e3e05 -m comment --comment name: "crio" id: "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-013276935d946a4db99e3e05':No such file or directory
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         Try `iptables -h' or 'iptables --help' for more information.
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         ]
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:  >
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]: E1025 01:49:00.645763    1203 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err=<
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         rpc error: code = Unknown desc = [failed to set up sandbox container "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" network for pod "coredns-565d847f94-48b4v": networkPlugin cni failed to set up pod "coredns-565d847f94-48b4v_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" network for pod "coredns-565d847f94-48b4v": networkPlugin cni failed to teardown pod "coredns-565d847f94-48b4v_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.10 -j CNI-013276935d946a4db99e3e05 -m comment --comment name: "crio" id: "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-013276935d946a4db99e3e05':No such file or directory
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         Try `iptables -h' or 'iptables --help' for more information.
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         ]
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:  > pod="kube-system/coredns-565d847f94-48b4v"
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]: E1025 01:49:00.645800    1203 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err=<
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         rpc error: code = Unknown desc = [failed to set up sandbox container "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" network for pod "coredns-565d847f94-48b4v": networkPlugin cni failed to set up pod "coredns-565d847f94-48b4v_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" network for pod "coredns-565d847f94-48b4v": networkPlugin cni failed to teardown pod "coredns-565d847f94-48b4v_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.10 -j CNI-013276935d946a4db99e3e05 -m comment --comment name: "crio" id: "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-013276935d946a4db99e3e05':No such file or directory
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         Try `iptables -h' or 'iptables --help' for more information.
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         ]
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:  > pod="kube-system/coredns-565d847f94-48b4v"
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]: E1025 01:49:00.646026    1203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-565d847f94-48b4v_kube-system(cfcce53d-c202-4d51-91e2-4504a0b7ab56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-565d847f94-48b4v_kube-system(cfcce53d-c202-4d51-91e2-4504a0b7ab56)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00\\\" network for pod \\\"coredns-565d847f94-48b4v\\\": networkPlugin cni failed to set up pod \\\"coredns-565d847f94-48b4v_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00\\\" network for pod \\\"coredns-565d847f94-48b4v\\\": networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-48b4v_kube-system\\\" n
etwork: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.10 -j CNI-013276935d946a4db99e3e05 -m comment --comment name: \\\"crio\\\" id: \\\"56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-013276935d946a4db99e3e05':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-565d847f94-48b4v" podUID=cfcce53d-c202-4d51-91e2-4504a0b7ab56
	Oct 25 01:49:01 newest-cni-014519 kubelet[1203]: I1025 01:49:01.720166    1203 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2b5aa6cd6bf285814968ffe5ad9b5aeabfc8efb36c7c2d3d0782526b08f9615a"
	Oct 25 01:49:02 newest-cni-014519 kubelet[1203]: I1025 01:49:02.353449    1203 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d7d47d6175b20dd0398269055aba4f03a47477527f8d5df6b5885dd1e11f02e5"
	Oct 25 01:49:02 newest-cni-014519 kubelet[1203]: I1025 01:49:02.483716    1203 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3ee97de7a1ce626673af734ff712924242e2d515fabd82803f3ca71b42ee152a"
	Oct 25 01:49:03 newest-cni-014519 kubelet[1203]: I1025 01:49:03.008044    1203 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 25 01:49:03 newest-cni-014519 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Oct 25 01:49:03 newest-cni-014519 systemd[1]: kubelet.service: Succeeded.
	Oct 25 01:49:03 newest-cni-014519 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [8c32d977d6a4] <==
	* I1025 01:47:58.687763       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	* 
	* ==> storage-provisioner [a2108138e300] <==
	* I1025 01:48:49.091272       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 01:49:20.599980    9232 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-014519 -n newest-cni-014519
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-014519 -n newest-cni-014519: exit status 2 (1.8347277s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "newest-cni-014519" apiserver is not running, skipping kubectl commands (state="Paused")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-014519
helpers_test.go:235: (dbg) docker inspect newest-cni-014519:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "56bd31edd99a33eef2d96a448aaca4408f21e51f10e086cc12177f030b7c3fb6",
	        "Created": "2022-10-25T01:46:01.6707992Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 320788,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-10-25T01:48:10.7022639Z",
	            "FinishedAt": "2022-10-25T01:48:04.5419745Z"
	        },
	        "Image": "sha256:bee7563418bf494c9ba81d904a81ea2c80a1e144325734b9d4b288db23240ab5",
	        "ResolvConfPath": "/var/lib/docker/containers/56bd31edd99a33eef2d96a448aaca4408f21e51f10e086cc12177f030b7c3fb6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/56bd31edd99a33eef2d96a448aaca4408f21e51f10e086cc12177f030b7c3fb6/hostname",
	        "HostsPath": "/var/lib/docker/containers/56bd31edd99a33eef2d96a448aaca4408f21e51f10e086cc12177f030b7c3fb6/hosts",
	        "LogPath": "/var/lib/docker/containers/56bd31edd99a33eef2d96a448aaca4408f21e51f10e086cc12177f030b7c3fb6/56bd31edd99a33eef2d96a448aaca4408f21e51f10e086cc12177f030b7c3fb6-json.log",
	        "Name": "/newest-cni-014519",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-014519:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/458f5fd9996cbd7add54976159c06dcfb8677fe1dda1c55ff453f3e36f85c3d7-init/diff:/var/lib/docker/overlay2/1d72d69c076943d6cd413bc50b6a474779145c6396136b4aef1829c16f4a6d69/diff:/var/lib/docker/overlay2/2712457ef6b3ec08714d64e5261a9b327c3f8db2156d7a1b493340af804c46f1/diff:/var/lib/docker/overlay2/956ad2e584ed04429b79ab0ee4bdc8977af3fcfbab3cc0ed570922cc07ffd0a6/diff:/var/lib/docker/overlay2/c4f80c5076f71429b4266dc613d1850e7295faded99f05e04fcb13d2cb4d3157/diff:/var/lib/docker/overlay2/18b12a09b44604345877d4490348801b993263f747090a3a48eac835ac323d86/diff:/var/lib/docker/overlay2/6ce1e052ac8d5221cb1978a93a4c4d18c74da80e998b6e54246cdc95997a769f/diff:/var/lib/docker/overlay2/9e6e7c177b550c9c4fc4af8222ccc9bfe5b01fa177f08388c541fde750e4df80/diff:/var/lib/docker/overlay2/c56ad1fbd8fd09ba635cb91b82c303fab8be925f82edac48c47ed2b99f054b36/diff:/var/lib/docker/overlay2/b4a229acad56b83bd9d04813f3f4cf0c8c562169b12ef1e88243f4588d0b28f9/diff:/var/lib/docker/overlay2/56f30b
af9b74a7e6afda16e0f90a1863a3db06b5fec5cf06828152edc0faa420/diff:/var/lib/docker/overlay2/4275e6a6be34231198b756601a3b51a1d8446e8830b1c4037b20370047b88b9e/diff:/var/lib/docker/overlay2/0a9f47913b546daa2d558a978beaaa9e1e7e73a568fa1ee9d198e1e2154d3f75/diff:/var/lib/docker/overlay2/f1895cfb690eaa9bf966dd3f040878344a80c0dc3606dd2d5e67d9495cfa3ff8/diff:/var/lib/docker/overlay2/84335bbaf957cb1942f1d774b817e78297dbe5ffeb7e2e406e7492cf5a720c7e/diff:/var/lib/docker/overlay2/d9a26e65c06347ae6f8f306617639febfee5427dffa6d33a6acb3abfc22092fb/diff:/var/lib/docker/overlay2/a6893072e83e913a455da1f55020a69e4cd75c9ca7b9893e47d184eaf0da806d/diff:/var/lib/docker/overlay2/2d4c8dbcc1a6e63159280d831a4e448df4587dae065b53837a0e735e579361c4/diff:/var/lib/docker/overlay2/6fd2d854ad2aede74411487bcfe2f1fa3c4e1bbfad739455a690a5801c7c9d18/diff:/var/lib/docker/overlay2/d8435d49436e1e6d94054688732a28cdf047031ca600d938ab879a3f72791749/diff:/var/lib/docker/overlay2/618bd9835cc6596945db86c2cd23a6ea6c60992ff42cb8ba7a13f96776d79bb3/diff:/var/lib/d
ocker/overlay2/8e9af4c331a1374dad5f203889fa4953cd3111c705011d2f885ce8a3a04daf2c/diff:/var/lib/docker/overlay2/b8b4d702f888aa572be928e4e449cfaed5da2a045d94f145c0d48b2f838a2dc5/diff:/var/lib/docker/overlay2/6b708706c388c674df30fea4b16deb3b96447089d2a1cd5341ef199bd5dc3c4e/diff:/var/lib/docker/overlay2/f3bab3644fefb2215fd7b4b857958be30f575fd080ec37030b8b970e46155cdc/diff:/var/lib/docker/overlay2/809d38d9cc75c39f4eab1c2c64257e010b66f6dd17717a251371701f51b07237/diff:/var/lib/docker/overlay2/b2fc12e35954dea9baf6e418bbc1b629a71863e855e4373e8d665590cd7cbc54/diff:/var/lib/docker/overlay2/34dcaea23605015741cd4c620ce445c935ca6a08892a5aa15165a8422bb013c0/diff:/var/lib/docker/overlay2/4c362976bdb9f18c68d5c294dc08d7939899992ed5f8bb13ab34f58ec03fcdd6/diff:/var/lib/docker/overlay2/316879c125d7c6ab5ddb970715d730f6a9ea41f2b58da1ac9379b1d528a25970/diff:/var/lib/docker/overlay2/241a6ea1a0e862f8ac9d51e14f03999907acd9030349143120fad52b3c1c2b97/diff:/var/lib/docker/overlay2/c64f861002875793ea9a7d58a0e0b96ad95c3c7fb2874b758d4fb1bc26c
34587/diff:/var/lib/docker/overlay2/9b91106560e299e000b1229f3c2774c8ff0b881dbb4a27b80b89d0287f2f581d/diff:/var/lib/docker/overlay2/48a0a6d3a2a4100e68d167121a7df5a2244821b71406e29d5cc8220307ed9847/diff:/var/lib/docker/overlay2/1f280e54c1637034501f87fed8ca123799984880082b190271d5fa183974cb70/diff:/var/lib/docker/overlay2/8b8d91bd6daf07b06612bec716b08ed3d8032a4caa291548eead78a2b2c7e037/diff:/var/lib/docker/overlay2/b3ab8284e9708da3d4a94f3bd549609f23fcc286b4c1522cdb244344a4957bba/diff:/var/lib/docker/overlay2/7cc92644ec11a70cec25faf398c533eaa555c3a0ab3e783bf6f0cb342f18de20/diff:/var/lib/docker/overlay2/7f44e48c3f9293e16b6fedacc411012e83674000293a110908fcbe7b8aa0f56c/diff:/var/lib/docker/overlay2/7ded7fd7dc10119d3c74efa565ab8580571328086d82d5e795e7adcd3276e653/diff:/var/lib/docker/overlay2/b4654f15c85f235a8a9d5b03067d9aacd8d02569b48170551e8cc1fb340698ad/diff:/var/lib/docker/overlay2/901a06d4c922f4dcb994eec1c950879f560844312e104093523c1f1637594c70/diff:/var/lib/docker/overlay2/0fdbbeb11fdbed96bd80868c62d4c13bf887e7
83043225667d2bde711d03b757/diff",
	                "MergedDir": "/var/lib/docker/overlay2/458f5fd9996cbd7add54976159c06dcfb8677fe1dda1c55ff453f3e36f85c3d7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/458f5fd9996cbd7add54976159c06dcfb8677fe1dda1c55ff453f3e36f85c3d7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/458f5fd9996cbd7add54976159c06dcfb8677fe1dda1c55ff453f3e36f85c3d7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-014519",
	                "Source": "/var/lib/docker/volumes/newest-cni-014519/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-014519",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-014519",
	                "name.minikube.sigs.k8s.io": "newest-cni-014519",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9ac6ba89d1f1480230cb193557db85aec0735e65ddd6ee8e54cc0af6bc3fc6a6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50399"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50395"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50396"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50397"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50398"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9ac6ba89d1f1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "59dd7bd9956d3c371671e9429da5e61a79cca582c848dd2a23d7fca2654cac72",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "3d13f6afc0480320c24c724d761e552bf16a8baec115a212b99351bb4c3bc4ea",
	                    "EndpointID": "59dd7bd9956d3c371671e9429da5e61a79cca582c848dd2a23d7fca2654cac72",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-014519 -n newest-cni-014519
E1025 01:49:26.028744    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-014519 -n newest-cni-014519: exit status 2 (1.9448972s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-014519 logs -n 25
E1025 01:49:36.283148    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-014519 logs -n 25: (14.2507216s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| unpause | -p no-preload-013544                                       | no-preload-013544            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:45 GMT | 25 Oct 22 01:46 GMT |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| delete  | -p no-preload-013544                                       | no-preload-013544            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	| ssh     | -p old-k8s-version-013521 sudo                             | old-k8s-version-013521       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	|         | crictl images -o json                                      |                              |                   |         |                     |                     |
	| delete  | -p no-preload-013544                                       | no-preload-013544            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	| pause   | -p old-k8s-version-013521                                  | old-k8s-version-013521       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| start   | -p auto-012955 --memory=2048                               | auto-012955                  | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:48 GMT |
	|         | --alsologtostderr                                          |                              |                   |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                              |                   |         |                     |                     |
	|         | --driver=docker                                            |                              |                   |         |                     |                     |
	| unpause | -p old-k8s-version-013521                                  | old-k8s-version-013521       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-diff-port-013732 | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	|         | default-k8s-diff-port-013732                               |                              |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |                              |                   |         |                     |                     |
	| delete  | -p old-k8s-version-013521                                  | old-k8s-version-013521       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	| pause   | -p                                                         | default-k8s-diff-port-013732 | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	|         | default-k8s-diff-port-013732                               |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| unpause | -p                                                         | default-k8s-diff-port-013732 | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:46 GMT |
	|         | default-k8s-diff-port-013732                               |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| delete  | -p old-k8s-version-013521                                  | old-k8s-version-013521       | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:46 GMT | 25 Oct 22 01:47 GMT |
	| start   | -p cilium-012958 --memory=2048                             | cilium-012958                | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:47 GMT |                     |
	|         | --alsologtostderr --wait=true                              |                              |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=cilium                             |                              |                   |         |                     |                     |
	|         | --driver=docker                                            |                              |                   |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-013732 | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:47 GMT | 25 Oct 22 01:47 GMT |
	|         | default-k8s-diff-port-013732                               |                              |                   |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-013732 | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:47 GMT | 25 Oct 22 01:47 GMT |
	|         | default-k8s-diff-port-013732                               |                              |                   |         |                     |                     |
	| start   | -p calico-012958 --memory=2048                             | calico-012958                | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:47 GMT |                     |
	|         | --alsologtostderr --wait=true                              |                              |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=calico                             |                              |                   |         |                     |                     |
	|         | --driver=docker                                            |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-014519                 | newest-cni-014519            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:47 GMT | 25 Oct 22 01:48 GMT |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |                   |         |                     |                     |
	| stop    | -p newest-cni-014519                                       | newest-cni-014519            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:48 GMT | 25 Oct 22 01:48 GMT |
	|         | --alsologtostderr -v=3                                     |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-014519                      | newest-cni-014519            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:48 GMT | 25 Oct 22 01:48 GMT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |                   |         |                     |                     |
	| start   | -p newest-cni-014519 --memory=2200 --alsologtostderr       | newest-cni-014519            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:48 GMT | 25 Oct 22 01:48 GMT |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |                   |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.25.3               |                              |                   |         |                     |                     |
	| ssh     | -p auto-012955 pgrep -a                                    | auto-012955                  | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:48 GMT | 25 Oct 22 01:48 GMT |
	|         | kubelet                                                    |                              |                   |         |                     |                     |
	| ssh     | -p newest-cni-014519 sudo                                  | newest-cni-014519            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:48 GMT | 25 Oct 22 01:49 GMT |
	|         | crictl images -o json                                      |                              |                   |         |                     |                     |
	| pause   | -p newest-cni-014519                                       | newest-cni-014519            | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:49 GMT |                     |
	|         | --alsologtostderr -v=1                                     |                              |                   |         |                     |                     |
	| delete  | -p auto-012955                                             | auto-012955                  | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:49 GMT | 25 Oct 22 01:49 GMT |
	| start   | -p false-012957 --memory=2048                              | false-012957                 | minikube8\jenkins | v1.27.1 | 25 Oct 22 01:49 GMT |                     |
	|         | --alsologtostderr --wait=true                              |                              |                   |         |                     |                     |
	|         | --wait-timeout=5m --cni=false                              |                              |                   |         |                     |                     |
	|         | --driver=docker                                            |                              |                   |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/25 01:49:18
	Running on machine: minikube8
	Binary: Built with gc go1.19.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 01:49:18.647979    6252 out.go:296] Setting OutFile to fd 864 ...
	I1025 01:49:18.710114    6252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:49:18.710114    6252 out.go:309] Setting ErrFile to fd 1608...
	I1025 01:49:18.710114    6252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:49:18.730120    6252 out.go:303] Setting JSON to false
	I1025 01:49:18.732134    6252 start.go:116] hostinfo: {"hostname":"minikube8","uptime":12203,"bootTime":1666650355,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W1025 01:49:18.732134    6252 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 01:49:18.737117    6252 out.go:177] * [false-012957] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1025 01:49:18.740108    6252 notify.go:220] Checking for updates...
	I1025 01:49:18.743118    6252 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 01:49:18.747111    6252 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I1025 01:49:18.753103    6252 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 01:49:18.755104    6252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 01:49:18.757104    6252 config.go:180] Loaded profile config "calico-012958": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:49:18.758101    6252 config.go:180] Loaded profile config "cilium-012958": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:49:18.758101    6252 config.go:180] Loaded profile config "newest-cni-014519": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:49:18.758101    6252 driver.go:362] Setting default libvirt URI to qemu:///system
	I1025 01:49:19.080936    6252 docker.go:137] docker version: linux-20.10.17
	I1025 01:49:19.096521    6252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:49:19.727746    6252 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:86 OomKillDisable:true NGoroutines:61 SystemTime:2022-10-25 01:49:19.2700935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:49:19.732794    6252 out.go:177] * Using the docker driver based on user configuration
	I1025 01:49:19.738754    6252 start.go:282] selected driver: docker
	I1025 01:49:19.738754    6252 start.go:808] validating driver "docker" against <nil>
	I1025 01:49:19.739742    6252 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 01:49:19.817806    6252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:49:20.411211    6252 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:86 OomKillDisable:true NGoroutines:61 SystemTime:2022-10-25 01:49:19.9887595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:49:20.411211    6252 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 01:49:20.412177    6252 start_flags.go:885] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 01:49:20.415186    6252 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 01:49:20.417189    6252 cni.go:95] Creating CNI manager for "false"
	I1025 01:49:20.417189    6252 start_flags.go:317] config:
	{Name:false-012957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:false-012957 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 01:49:20.419190    6252 out.go:177] * Starting control plane node false-012957 in cluster false-012957
	I1025 01:49:20.423174    6252 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 01:49:20.425174    6252 out.go:177] * Pulling base image ...
	I1025 01:49:16.733117    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:18.999840    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:20.428175    6252 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 01:49:20.428175    6252 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 01:49:20.428175    6252 cache.go:57] Caching tarball of preloaded images
	I1025 01:49:20.428175    6252 image.go:82] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 01:49:20.428175    6252 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 01:49:20.428175    6252 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 01:49:20.429182    6252 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\config.json ...
	I1025 01:49:20.429182    6252 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\config.json: {Name:mk39c3f86e3f0bbf15363594883bb47ee3d14089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:49:20.664994    6252 image.go:86] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 01:49:20.664994    6252 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 01:49:20.664994    6252 cache.go:208] Successfully downloaded all kic artifacts
	I1025 01:49:20.664994    6252 start.go:364] acquiring machines lock for false-012957: {Name:mk23dfc272c3bd67409b35e024beced261a39f2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 01:49:20.664994    6252 start.go:368] acquired machines lock for "false-012957" in 0s
	I1025 01:49:20.664994    6252 start.go:93] Provisioning new machine with config: &{Name:false-012957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:false-012957 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 01:49:20.664994    6252 start.go:125] createHost starting for "" (driver="docker")
	I1025 01:49:16.611472   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:18.720116   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:20.783596   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:20.974202    6252 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 01:49:20.976284    6252 start.go:159] libmachine.API.Create for "false-012957" (driver="docker")
	I1025 01:49:20.976850    6252 client.go:168] LocalClient.Create starting
	I1025 01:49:20.977441    6252 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I1025 01:49:20.977772    6252 main.go:134] libmachine: Decoding PEM data...
	I1025 01:49:20.977772    6252 main.go:134] libmachine: Parsing certificate...
	I1025 01:49:20.978121    6252 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I1025 01:49:20.978292    6252 main.go:134] libmachine: Decoding PEM data...
	I1025 01:49:20.978424    6252 main.go:134] libmachine: Parsing certificate...
	I1025 01:49:21.005078    6252 cli_runner.go:164] Run: docker network inspect false-012957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 01:49:21.231166    6252 cli_runner.go:211] docker network inspect false-012957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 01:49:21.237162    6252 network_create.go:272] running [docker network inspect false-012957] to gather additional debugging logs...
	I1025 01:49:21.237162    6252 cli_runner.go:164] Run: docker network inspect false-012957
	W1025 01:49:21.450174    6252 cli_runner.go:211] docker network inspect false-012957 returned with exit code 1
	I1025 01:49:21.450174    6252 network_create.go:275] error running [docker network inspect false-012957]: docker network inspect false-012957: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-012957
	I1025 01:49:21.450174    6252 network_create.go:277] output of [docker network inspect false-012957]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-012957
	
	** /stderr **
	I1025 01:49:21.457659    6252 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 01:49:21.728608    6252 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00058aa20] misses:0}
	I1025 01:49:21.728608    6252 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:49:21.728608    6252 network_create.go:115] attempt to create docker network false-012957 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 01:49:21.737632    6252 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-012957 false-012957
	W1025 01:49:21.967958    6252 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-012957 false-012957 returned with exit code 1
	W1025 01:49:21.968094    6252 network_create.go:107] failed to create docker network false-012957 192.168.49.0/24, will retry: subnet is taken
	I1025 01:49:21.998892    6252 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058aa20] amended:false}} dirty:map[] misses:0}
	I1025 01:49:21.998892    6252 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:49:22.018866    6252 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058aa20] amended:true}} dirty:map[192.168.49.0:0xc00058aa20 192.168.58.0:0xc00071c310] misses:0}
	I1025 01:49:22.018866    6252 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:49:22.018866    6252 network_create.go:115] attempt to create docker network false-012957 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 01:49:22.025575    6252 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-012957 false-012957
	W1025 01:49:22.261312    6252 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-012957 false-012957 returned with exit code 1
	W1025 01:49:22.261312    6252 network_create.go:107] failed to create docker network false-012957 192.168.58.0/24, will retry: subnet is taken
	I1025 01:49:22.288315    6252 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058aa20] amended:true}} dirty:map[192.168.49.0:0xc00058aa20 192.168.58.0:0xc00071c310] misses:1}
	I1025 01:49:22.288315    6252 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:49:22.314299    6252 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058aa20] amended:true}} dirty:map[192.168.49.0:0xc00058aa20 192.168.58.0:0xc00071c310 192.168.67.0:0xc00071c288] misses:1}
	I1025 01:49:22.314299    6252 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:49:22.314299    6252 network_create.go:115] attempt to create docker network false-012957 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 01:49:22.321303    6252 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-012957 false-012957
	W1025 01:49:22.559995    6252 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-012957 false-012957 returned with exit code 1
	W1025 01:49:22.559995    6252 network_create.go:107] failed to create docker network false-012957 192.168.67.0/24, will retry: subnet is taken
	I1025 01:49:22.589548    6252 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058aa20] amended:true}} dirty:map[192.168.49.0:0xc00058aa20 192.168.58.0:0xc00071c310 192.168.67.0:0xc00071c288] misses:2}
	I1025 01:49:22.590040    6252 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:49:22.617844    6252 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058aa20] amended:true}} dirty:map[192.168.49.0:0xc00058aa20 192.168.58.0:0xc00071c310 192.168.67.0:0xc00071c288 192.168.76.0:0xc000608318] misses:2}
	I1025 01:49:22.617844    6252 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:49:22.617844    6252 network_create.go:115] attempt to create docker network false-012957 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 01:49:22.624820    6252 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-012957 false-012957
	W1025 01:49:22.872016    6252 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-012957 false-012957 returned with exit code 1
	W1025 01:49:22.872134    6252 network_create.go:107] failed to create docker network false-012957 192.168.76.0/24, will retry: subnet is taken
	I1025 01:49:22.912761    6252 network.go:286] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058aa20] amended:true}} dirty:map[192.168.49.0:0xc00058aa20 192.168.58.0:0xc00071c310 192.168.67.0:0xc00071c288 192.168.76.0:0xc000608318] misses:3}
	I1025 01:49:22.912761    6252 network.go:244] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:49:22.931756    6252 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058aa20] amended:true}} dirty:map[192.168.49.0:0xc00058aa20 192.168.58.0:0xc00071c310 192.168.67.0:0xc00071c288 192.168.76.0:0xc000608318 192.168.85.0:0xc00071c328] misses:3}
	I1025 01:49:22.931756    6252 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:49:22.931756    6252 network_create.go:115] attempt to create docker network false-012957 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 01:49:22.938761    6252 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-012957 false-012957
	I1025 01:49:23.317544    6252 network_create.go:99] docker network false-012957 192.168.85.0/24 created
	I1025 01:49:23.317544    6252 kic.go:106] calculated static IP "192.168.85.2" for the "false-012957" container
	I1025 01:49:23.331547    6252 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 01:49:23.610907    6252 cli_runner.go:164] Run: docker volume create false-012957 --label name.minikube.sigs.k8s.io=false-012957 --label created_by.minikube.sigs.k8s.io=true
	I1025 01:49:21.094181    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:23.405237    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:25.413360    4244 pod_ready.go:102] pod "cilium-wr8k8" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:22.982041   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	I1025 01:49:25.275383   10588 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-jg9k7" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-10-25 01:48:11 UTC, end at Tue 2022-10-25 01:49:30 UTC. --
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.436598700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.475331500Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.497963600Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.498077000Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.498096000Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.498104300Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.498112200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.498119800Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.498503500Z" level=info msg="Loading containers: start."
	Oct 25 01:48:20 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:20.913566000Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 25 01:48:21 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:21.075557600Z" level=info msg="Loading containers: done."
	Oct 25 01:48:21 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:21.142358100Z" level=info msg="Docker daemon" commit=e42327a graphdriver(s)=overlay2 version=20.10.18
	Oct 25 01:48:21 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:21.142543200Z" level=info msg="Daemon has completed initialization"
	Oct 25 01:48:21 newest-cni-014519 systemd[1]: Started Docker Application Container Engine.
	Oct 25 01:48:21 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:21.219014700Z" level=info msg="API listen on [::]:2376"
	Oct 25 01:48:21 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:21.224018100Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 25 01:48:48 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:48.492187400Z" level=info msg="ignoring event" container=8b968f932b5af74087ecefefa9d5b8d1bed29f99482af778b38253980a744b03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:48:48 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:48.701741000Z" level=info msg="ignoring event" container=f47949cb36488bcc6969975a4d7637dccb3b13cb8f7710cc833055e45556a128 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:48:53 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:53.183067900Z" level=info msg="ignoring event" container=b61d1bc71a77ba936275e082dad48376380d31002af337e66d3a64820ad592c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:48:55 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:55.902999100Z" level=info msg="ignoring event" container=a5bc0978ad5b1133ec2137cc9f823009eaf73c08a17bd5ef431b32cf8b7df748 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:48:57 newest-cni-014519 dockerd[641]: time="2022-10-25T01:48:57.283320500Z" level=info msg="ignoring event" container=89701d2b20a9f1b4be58e846c61a205b98bdb84ef58fb0c7bf9591720606fd59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:49:00 newest-cni-014519 dockerd[641]: time="2022-10-25T01:49:00.359069700Z" level=info msg="ignoring event" container=56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:49:02 newest-cni-014519 dockerd[641]: time="2022-10-25T01:49:02.821029400Z" level=info msg="ignoring event" container=2b5aa6cd6bf285814968ffe5ad9b5aeabfc8efb36c7c2d3d0782526b08f9615a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:49:03 newest-cni-014519 dockerd[641]: time="2022-10-25T01:49:03.315809600Z" level=info msg="ignoring event" container=d7d47d6175b20dd0398269055aba4f03a47477527f8d5df6b5885dd1e11f02e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 01:49:03 newest-cni-014519 dockerd[641]: time="2022-10-25T01:49:03.502676200Z" level=info msg="ignoring event" container=3ee97de7a1ce626673af734ff712924242e2d515fabd82803f3ca71b42ee152a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	a2108138e3005       6e38f40d628db       43 seconds ago       Running             storage-provisioner       1                   375e905850293
	47738d6c8227c       beaaf00edd38a       43 seconds ago       Running             kube-proxy                1                   c22cc3b5e7fc3
	6fdfd9f56268e       a8a176a5d5d69       57 seconds ago       Running             etcd                      1                   ad84ef6daead6
	d4b7a7ce03a2a       6039992312758       57 seconds ago       Running             kube-controller-manager   2                   eaae751434a89
	40651d26ca2af       0346dbd74bcb9       57 seconds ago       Running             kube-apiserver            1                   d5d01d92e7ed9
	d728c043f9f7c       6d23ec0e8b87e       57 seconds ago       Running             kube-scheduler            1                   4f0fa2a45d194
	8c32d977d6a4b       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   76af3fc0ba53f
	e4edba2e75564       beaaf00edd38a       About a minute ago   Exited              kube-proxy                0                   be757359bd1ef
	97497f2af52ae       6039992312758       2 minutes ago        Exited              kube-controller-manager   1                   4174f19e3463a
	7ff17fe113907       0346dbd74bcb9       2 minutes ago        Exited              kube-apiserver            0                   6199efa3639ae
	669ed1999ddd8       a8a176a5d5d69       2 minutes ago        Exited              etcd                      0                   dc246491ed5a3
	d543071ca09ff       6d23ec0e8b87e       2 minutes ago        Exited              kube-scheduler            0                   08c2e2824d68e
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Oct25 01:25] WSL2: Performing memory compaction.
	[Oct25 01:26] process 'docker/tmp/qemu-check146077527/check' started with executable stack
	[Oct25 01:28] WSL2: Performing memory compaction.
	[Oct25 01:29] WSL2: Performing memory compaction.
	[Oct25 01:30] WSL2: Performing memory compaction.
	[Oct25 01:31] WSL2: Performing memory compaction.
	[Oct25 01:32] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.169345] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000022] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000876] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct25 01:33] WSL2: Performing memory compaction.
	[Oct25 01:35] WSL2: Performing memory compaction.
	[Oct25 01:37] WSL2: Performing memory compaction.
	[Oct25 01:46] WSL2: Performing memory compaction.
	[Oct25 01:47] WSL2: Performing memory compaction.
	[Oct25 01:49] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [669ed1999ddd] <==
	* {"level":"warn","ts":"2022-10-25T01:47:55.178Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:47:54.475Z","time spent":"703.1804ms","remote":"127.0.0.1:43110","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":194,"request content":"key:\"/registry/serviceaccounts/default/\" range_end:\"/registry/serviceaccounts/default0\" "}
	{"level":"info","ts":"2022-10-25T01:47:55.178Z","caller":"traceutil/trace.go:171","msg":"trace[1298668405] range","detail":"{range_begin:/registry/roles/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:367; }","duration":"698.0532ms","start":"2022-10-25T01:47:54.480Z","end":"2022-10-25T01:47:55.178Z","steps":["trace[1298668405] 'agreement among raft nodes before linearized reading'  (duration: 202.1362ms)","trace[1298668405] 'range keys from in-memory index tree'  (duration: 495.8354ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:47:55.178Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:47:54.699Z","time spent":"478.8742ms","remote":"127.0.0.1:43102","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":364,"request content":"key:\"/registry/namespaces/default\" "}
	{"level":"warn","ts":"2022-10-25T01:47:55.178Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-10-25T01:47:54.480Z","time spent":"698.1804ms","remote":"127.0.0.1:43156","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":0,"response size":28,"request content":"key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" "}
	{"level":"warn","ts":"2022-10-25T01:47:55.463Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"154.503ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13557105968896846591 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:3c24840cd26caafe>","response":"size:40"}
	{"level":"info","ts":"2022-10-25T01:47:55.463Z","caller":"traceutil/trace.go:171","msg":"trace[1956782486] linearizableReadLoop","detail":"{readStateIndex:385; appliedIndex:384; }","duration":"262.6763ms","start":"2022-10-25T01:47:55.201Z","end":"2022-10-25T01:47:55.463Z","steps":["trace[1956782486] 'read index received'  (duration: 107.8409ms)","trace[1956782486] 'applied index is now lower than readState.Index'  (duration: 154.8309ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:47:55.463Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"262.8214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-10-25T01:47:55.464Z","caller":"traceutil/trace.go:171","msg":"trace[1094349952] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:368; }","duration":"262.9513ms","start":"2022-10-25T01:47:55.201Z","end":"2022-10-25T01:47:55.464Z","steps":["trace[1094349952] 'agreement among raft nodes before linearized reading'  (duration: 262.7866ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:47:55.464Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"184.1981ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4574"}
	{"level":"info","ts":"2022-10-25T01:47:55.464Z","caller":"traceutil/trace.go:171","msg":"trace[1372077406] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:368; }","duration":"184.4424ms","start":"2022-10-25T01:47:55.279Z","end":"2022-10-25T01:47:55.464Z","steps":["trace[1372077406] 'agreement among raft nodes before linearized reading'  (duration: 184.1356ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:47:59.710Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"114.5677ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-5c8fd5cf8-njrkj\" ","response":"range_response_count:1 size:2935"}
	{"level":"warn","ts":"2022-10-25T01:47:59.710Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.7825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:5133"}
	{"level":"warn","ts":"2022-10-25T01:47:59.710Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.9778ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2022-10-25T01:47:59.710Z","caller":"traceutil/trace.go:171","msg":"trace[1054345671] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:407; }","duration":"107.8676ms","start":"2022-10-25T01:47:59.602Z","end":"2022-10-25T01:47:59.710Z","steps":["trace[1054345671] 'agreement among raft nodes before linearized reading'  (duration: 94.8578ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T01:47:59.710Z","caller":"traceutil/trace.go:171","msg":"trace[1238392215] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:407; }","duration":"108.0416ms","start":"2022-10-25T01:47:59.602Z","end":"2022-10-25T01:47:59.710Z","steps":["trace[1238392215] 'agreement among raft nodes before linearized reading'  (duration: 95.0134ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T01:47:59.710Z","caller":"traceutil/trace.go:171","msg":"trace[662358375] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-5c8fd5cf8-njrkj; range_end:; response_count:1; response_revision:407; }","duration":"114.8686ms","start":"2022-10-25T01:47:59.595Z","end":"2022-10-25T01:47:59.710Z","steps":["trace[662358375] 'agreement among raft nodes before linearized reading'  (duration: 101.9208ms)"],"step_count":1}
	{"level":"info","ts":"2022-10-25T01:48:02.574Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-10-25T01:48:02.574Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-014519","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.17.0.2:2380"],"advertise-client-urls":["https://172.17.0.2:2379"]}
	WARNING: 2022/10/25 01:48:02 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/10/25 01:48:02 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2022-10-25T01:48:02.693Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b8e14bda2255bc24","current-leader-member-id":"b8e14bda2255bc24"}
	WARNING: 2022/10/25 01:48:02 [core] grpc: addrConn.createTransport failed to connect to {172.17.0.2:2379 172.17.0.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 172.17.0.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-10-25T01:48:02.774Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"172.17.0.2:2380"}
	{"level":"info","ts":"2022-10-25T01:48:02.775Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"172.17.0.2:2380"}
	{"level":"info","ts":"2022-10-25T01:48:02.775Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-014519","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.17.0.2:2380"],"advertise-client-urls":["https://172.17.0.2:2379"]}
	
	* 
	* ==> etcd [6fdfd9f56268] <==
	* {"level":"info","ts":"2022-10-25T01:48:47.595Z","caller":"traceutil/trace.go:171","msg":"trace[513039722] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/statefulset-controller; range_end:; response_count:1; response_revision:465; }","duration":"110.4102ms","start":"2022-10-25T01:48:47.485Z","end":"2022-10-25T01:48:47.595Z","steps":["trace[513039722] 'agreement among raft nodes before linearized reading'  (duration: 107.3188ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:47.924Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.9366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2022-10-25T01:48:47.925Z","caller":"traceutil/trace.go:171","msg":"trace[528855984] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:468; }","duration":"102.5224ms","start":"2022-10-25T01:48:47.822Z","end":"2022-10-25T01:48:47.925Z","steps":["trace[528855984] 'agreement among raft nodes before linearized reading'  (duration: 52.1607ms)","trace[528855984] 'range keys from in-memory index tree'  (duration: 49.7148ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:48:49.987Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.5805ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-f8b8r\" ","response":"range_response_count:1 size:4709"}
	{"level":"info","ts":"2022-10-25T01:48:49.987Z","caller":"traceutil/trace.go:171","msg":"trace[152747243] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-f8b8r; range_end:; response_count:1; response_revision:479; }","duration":"107.7198ms","start":"2022-10-25T01:48:49.879Z","end":"2022-10-25T01:48:49.987Z","steps":["trace[152747243] 'agreement among raft nodes before linearized reading'  (duration: 18.7386ms)","trace[152747243] 'range keys from in-memory index tree'  (duration: 88.6972ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:48:50.542Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"146.2591ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-10-25T01:48:50.542Z","caller":"traceutil/trace.go:171","msg":"trace[1226052781] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:483; }","duration":"146.4952ms","start":"2022-10-25T01:48:50.396Z","end":"2022-10-25T01:48:50.542Z","steps":["trace[1226052781] 'range keys from in-memory index tree'  (duration: 143.8739ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:50.543Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"149.6089ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-newest-cni-014519\" ","response":"range_response_count:1 size:7267"}
	{"level":"info","ts":"2022-10-25T01:48:50.543Z","caller":"traceutil/trace.go:171","msg":"trace[139631497] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-newest-cni-014519; range_end:; response_count:1; response_revision:483; }","duration":"149.6479ms","start":"2022-10-25T01:48:50.393Z","end":"2022-10-25T01:48:50.543Z","steps":["trace[139631497] 'range keys from in-memory index tree'  (duration: 144.5938ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:50.726Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"126.0267ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-newest-cni-014519\" ","response":"range_response_count:1 size:7573"}
	{"level":"info","ts":"2022-10-25T01:48:50.727Z","caller":"traceutil/trace.go:171","msg":"trace[2094490250] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-newest-cni-014519; range_end:; response_count:1; response_revision:485; }","duration":"126.1965ms","start":"2022-10-25T01:48:50.600Z","end":"2022-10-25T01:48:50.727Z","steps":["trace[2094490250] 'agreement among raft nodes before linearized reading'  (duration: 73.6448ms)","trace[2094490250] 'range keys from in-memory index tree'  (duration: 52.3471ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-25T01:48:59.329Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.2376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-10-25T01:48:59.330Z","caller":"traceutil/trace.go:171","msg":"trace[917184062] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:543; }","duration":"110.5082ms","start":"2022-10-25T01:48:59.219Z","end":"2022-10-25T01:48:59.330Z","steps":["trace[917184062] 'agreement among raft nodes before linearized reading'  (duration: 110.1891ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:59.513Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"114.6357ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-014519\" ","response":"range_response_count:1 size:4574"}
	{"level":"info","ts":"2022-10-25T01:48:59.513Z","caller":"traceutil/trace.go:171","msg":"trace[1314684674] range","detail":"{range_begin:/registry/minions/newest-cni-014519; range_end:; response_count:1; response_revision:553; }","duration":"114.757ms","start":"2022-10-25T01:48:59.398Z","end":"2022-10-25T01:48:59.513Z","steps":["trace[1314684674] 'agreement among raft nodes before linearized reading'  (duration: 114.5126ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:59.513Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"115.1933ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-014519\" ","response":"range_response_count:1 size:4574"}
	{"level":"info","ts":"2022-10-25T01:48:59.514Z","caller":"traceutil/trace.go:171","msg":"trace[672257262] range","detail":"{range_begin:/registry/minions/newest-cni-014519; range_end:; response_count:1; response_revision:553; }","duration":"115.345ms","start":"2022-10-25T01:48:59.398Z","end":"2022-10-25T01:48:59.514Z","steps":["trace[672257262] 'agreement among raft nodes before linearized reading'  (duration: 115.1413ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:59.514Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"115.592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-014519\" ","response":"range_response_count:1 size:4574"}
	{"level":"info","ts":"2022-10-25T01:48:59.514Z","caller":"traceutil/trace.go:171","msg":"trace[1301735318] range","detail":"{range_begin:/registry/minions/newest-cni-014519; range_end:; response_count:1; response_revision:553; }","duration":"115.6417ms","start":"2022-10-25T01:48:59.398Z","end":"2022-10-25T01:48:59.514Z","steps":["trace[1301735318] 'agreement among raft nodes before linearized reading'  (duration: 115.5513ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:59.515Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"116.3715ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-014519\" ","response":"range_response_count:1 size:4574"}
	{"level":"info","ts":"2022-10-25T01:48:59.515Z","caller":"traceutil/trace.go:171","msg":"trace[2093128474] range","detail":"{range_begin:/registry/minions/newest-cni-014519; range_end:; response_count:1; response_revision:553; }","duration":"116.5826ms","start":"2022-10-25T01:48:59.398Z","end":"2022-10-25T01:48:59.515Z","steps":["trace[2093128474] 'agreement among raft nodes before linearized reading'  (duration: 116.2474ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:59.811Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.5322ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-57bbdc5f89\" ","response":"range_response_count:1 size:3143"}
	{"level":"info","ts":"2022-10-25T01:48:59.811Z","caller":"traceutil/trace.go:171","msg":"trace[1598526374] range","detail":"{range_begin:/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-57bbdc5f89; range_end:; response_count:1; response_revision:559; }","duration":"110.6691ms","start":"2022-10-25T01:48:59.701Z","end":"2022-10-25T01:48:59.811Z","steps":["trace[1598526374] 'agreement among raft nodes before linearized reading'  (duration: 110.4538ms)"],"step_count":1}
	{"level":"warn","ts":"2022-10-25T01:48:59.811Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"129.1391ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:1 size:957"}
	{"level":"info","ts":"2022-10-25T01:48:59.812Z","caller":"traceutil/trace.go:171","msg":"trace[1074079936] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:559; }","duration":"129.4308ms","start":"2022-10-25T01:48:59.682Z","end":"2022-10-25T01:48:59.812Z","steps":["trace[1074079936] 'agreement among raft nodes before linearized reading'  (duration: 129.0877ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  01:49:40 up  1:55,  0 users,  load average: 15.30, 11.17, 8.10
	Linux newest-cni-014519 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [40651d26ca2a] <==
	* I1025 01:48:44.011708       1 trace.go:205] Trace[606057151]: "Create" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:0a5f1a4d-6897-4004-a583-9ec790693925,client:172.17.0.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (25-Oct-2022 01:48:43.485) (total time: 525ms):
	Trace[606057151]: ---"Write to database call finished" len:234,err:<nil> 525ms (01:48:44.011)
	Trace[606057151]: [525.8447ms] [525.8447ms] END
	I1025 01:48:44.034060       1 trace.go:205] Trace[2146592351]: "Create" url:/api/v1/nodes,user-agent:kubelet/v1.25.3 (linux/amd64) kubernetes/434bfd8,audit-id:386cad93-3278-4308-888f-7a260461a76a,client:172.17.0.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (25-Oct-2022 01:48:43.492) (total time: 541ms):
	Trace[2146592351]: ---"Write to database call finished" len:2567,err:nodes "newest-cni-014519" already exists 541ms (01:48:44.033)
	Trace[2146592351]: [541.9736ms] [541.9736ms] END
	I1025 01:48:44.451136       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1025 01:48:44.792494       1 handler_proxy.go:105] no RequestInfo found in the context
	E1025 01:48:44.792644       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1025 01:48:44.792664       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 01:48:44.792686       1 handler_proxy.go:105] no RequestInfo found in the context
	E1025 01:48:44.792808       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1025 01:48:44.793868       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1025 01:48:46.401647       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1025 01:48:46.512540       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I1025 01:48:47.019522       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I1025 01:48:47.482203       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 01:48:47.676380       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 01:48:54.081307       1 controller.go:616] quota admission added evaluator for: namespaces
	I1025 01:48:55.017969       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.110.254.55]
	I1025 01:48:55.112707       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.96.194.145]
	I1025 01:48:58.992865       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I1025 01:48:59.097235       1 controller.go:616] quota admission added evaluator for: endpoints
	I1025 01:48:59.180954       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [7ff17fe11390] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 01:48:03.587988       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 01:48:03.588314       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 01:48:03.588321       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [97497f2af52a] <==
	* I1025 01:47:49.274522       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 01:47:49.274539       1 shared_informer.go:262] Caches are synced for node
	I1025 01:47:49.274586       1 range_allocator.go:166] Starting range CIDR allocator
	I1025 01:47:49.274597       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1025 01:47:49.274603       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I1025 01:47:49.274617       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1025 01:47:49.274778       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	W1025 01:47:49.274798       1 node_lifecycle_controller.go:1058] Missing timestamp for Node newest-cni-014519. Assuming now as a timestamp.
	I1025 01:47:49.274835       1 taint_manager.go:209] "Sending events to api server"
	I1025 01:47:49.274859       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1025 01:47:49.275089       1 event.go:294] "Event occurred" object="newest-cni-014519" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-014519 event: Registered Node newest-cni-014519 in Controller"
	I1025 01:47:49.275147       1 shared_informer.go:262] Caches are synced for persistent volume
	I1025 01:47:49.275162       1 shared_informer.go:262] Caches are synced for attach detach
	I1025 01:47:49.279657       1 shared_informer.go:262] Caches are synced for TTL
	I1025 01:47:49.302808       1 range_allocator.go:367] Set node newest-cni-014519 PodCIDR to [192.168.0.0/24]
	I1025 01:47:49.595933       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 01:47:49.680251       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 01:47:49.680481       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1025 01:47:49.778250       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-md6qs"
	I1025 01:47:49.800938       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-48b4v"
	I1025 01:47:49.977830       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-f8b8r"
	I1025 01:47:50.257093       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I1025 01:47:50.284585       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-md6qs"
	I1025 01:47:59.476658       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c8fd5cf8 to 1"
	I1025 01:47:59.512661       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c8fd5cf8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c8fd5cf8-njrkj"
	
	* 
	* ==> kube-controller-manager [d4b7a7ce03a2] <==
	* I1025 01:48:58.844422       1 shared_informer.go:262] Caches are synced for PV protection
	I1025 01:48:58.844503       1 shared_informer.go:262] Caches are synced for disruption
	I1025 01:48:58.844542       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1025 01:48:58.844756       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1025 01:48:58.844828       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1025 01:48:58.847299       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1025 01:48:58.847521       1 shared_informer.go:262] Caches are synced for ephemeral
	I1025 01:48:58.874648       1 shared_informer.go:262] Caches are synced for stateful set
	I1025 01:48:58.876372       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	E1025 01:48:58.898070       1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1025 01:48:58.898735       1 shared_informer.go:262] Caches are synced for attach detach
	I1025 01:48:58.903708       1 shared_informer.go:262] Caches are synced for resource quota
	E1025 01:48:58.907218       1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1025 01:48:58.913386       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I1025 01:48:58.975433       1 shared_informer.go:262] Caches are synced for endpoint
	I1025 01:48:58.981713       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 01:48:58.992710       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1025 01:48:58.994908       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1025 01:48:59.020359       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-7b94984548 to 1"
	I1025 01:48:59.079019       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-57bbdc5f89 to 1"
	I1025 01:48:59.304959       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-57bbdc5f89" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-57bbdc5f89-x7jd6"
	I1025 01:48:59.314150       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 01:48:59.329038       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 01:48:59.329079       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1025 01:48:59.383316       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7b94984548-pxbgw"
	
	* 
	* ==> kube-proxy [47738d6c8227] <==
	* I1025 01:48:49.101952       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1025 01:48:49.108200       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1025 01:48:49.111912       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1025 01:48:49.179016       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1025 01:48:49.196472       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I1025 01:48:49.218109       1 node.go:163] Successfully retrieved node IP: 172.17.0.2
	I1025 01:48:49.218281       1 server_others.go:138] "Detected node IP" address="172.17.0.2"
	I1025 01:48:49.218323       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1025 01:48:49.575598       1 server_others.go:206] "Using iptables Proxier"
	I1025 01:48:49.575658       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1025 01:48:49.575679       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1025 01:48:49.575787       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1025 01:48:49.575874       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 01:48:49.576518       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 01:48:49.577336       1 server.go:661] "Version info" version="v1.25.3"
	I1025 01:48:49.578097       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 01:48:49.579232       1 config.go:444] "Starting node config controller"
	I1025 01:48:49.579531       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1025 01:48:49.581453       1 config.go:226] "Starting endpoint slice config controller"
	I1025 01:48:49.581604       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1025 01:48:49.592047       1 config.go:317] "Starting service config controller"
	I1025 01:48:49.592080       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1025 01:48:49.691660       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1025 01:48:49.699487       1 shared_informer.go:262] Caches are synced for service config
	I1025 01:48:49.780438       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [e4edba2e7556] <==
	* I1025 01:47:58.894397       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I1025 01:47:58.897668       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I1025 01:47:58.901134       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I1025 01:47:58.973626       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I1025 01:47:58.977374       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I1025 01:47:59.007438       1 node.go:163] Successfully retrieved node IP: 172.17.0.2
	I1025 01:47:59.007649       1 server_others.go:138] "Detected node IP" address="172.17.0.2"
	I1025 01:47:59.008005       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1025 01:47:59.182428       1 server_others.go:206] "Using iptables Proxier"
	I1025 01:47:59.182594       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1025 01:47:59.182617       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1025 01:47:59.182644       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1025 01:47:59.182681       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 01:47:59.183241       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 01:47:59.183928       1 server.go:661] "Version info" version="v1.25.3"
	I1025 01:47:59.183973       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 01:47:59.184916       1 config.go:317] "Starting service config controller"
	I1025 01:47:59.184968       1 config.go:226] "Starting endpoint slice config controller"
	I1025 01:47:59.184990       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1025 01:47:59.184997       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1025 01:47:59.185206       1 config.go:444] "Starting node config controller"
	I1025 01:47:59.185226       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1025 01:47:59.286329       1 shared_informer.go:262] Caches are synced for service config
	I1025 01:47:59.286808       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1025 01:47:59.287319       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d543071ca09f] <==
	* W1025 01:47:24.322852       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 01:47:24.323004       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1025 01:47:24.616209       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 01:47:24.616347       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1025 01:47:24.716471       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 01:47:24.716641       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1025 01:47:24.964104       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 01:47:24.964225       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1025 01:47:25.898571       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 01:47:25.898697       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 01:47:26.430999       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 01:47:26.431181       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 01:47:30.275990       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 01:47:30.276120       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 01:47:32.076408       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 01:47:32.076536       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1025 01:47:32.389222       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 01:47:32.389352       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 01:47:34.180577       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 01:47:34.180723       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1025 01:47:51.891214       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 01:48:02.375282       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E1025 01:48:02.375457       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E1025 01:48:02.375475       1 run.go:74] "command failed" err="finished without leader elect"
	I1025 01:48:02.375529       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kube-scheduler [d728c043f9f7] <==
	* W1025 01:48:34.387403       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I1025 01:48:36.720428       1 serving.go:348] Generated self-signed cert in-memory
	W1025 01:48:43.592533       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 01:48:43.592580       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 01:48:43.592601       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 01:48:43.592618       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 01:48:43.696239       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1025 01:48:43.696400       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 01:48:43.700970       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 01:48:43.701257       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 01:48:43.701279       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 01:48:43.702533       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 01:48:43.802465       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-10-25 01:48:11 UTC, end at Tue 2022-10-25 01:49:41 UTC. --
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         rpc error: code = Unknown desc = [failed to set up sandbox container "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" network for pod "coredns-565d847f94-48b4v": networkPlugin cni failed to set up pod "coredns-565d847f94-48b4v_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" network for pod "coredns-565d847f94-48b4v": networkPlugin cni failed to teardown pod "coredns-565d847f94-48b4v_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.10 -j CNI-013276935d946a4db99e3e05 -m comment --comment name: "crio" id: "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-013276935d946a4db99e3e05':No such file or directory
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         Try `iptables -h' or 'iptables --help' for more information.
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         ]
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:  >
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]: E1025 01:49:00.645763    1203 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err=<
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         rpc error: code = Unknown desc = [failed to set up sandbox container "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" network for pod "coredns-565d847f94-48b4v": networkPlugin cni failed to set up pod "coredns-565d847f94-48b4v_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" network for pod "coredns-565d847f94-48b4v": networkPlugin cni failed to teardown pod "coredns-565d847f94-48b4v_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.10 -j CNI-013276935d946a4db99e3e05 -m comment --comment name: "crio" id: "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-013276935d946a4db99e3e05':No such file or directory
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         Try `iptables -h' or 'iptables --help' for more information.
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         ]
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:  > pod="kube-system/coredns-565d847f94-48b4v"
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]: E1025 01:49:00.645800    1203 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err=<
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         rpc error: code = Unknown desc = [failed to set up sandbox container "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" network for pod "coredns-565d847f94-48b4v": networkPlugin cni failed to set up pod "coredns-565d847f94-48b4v_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" network for pod "coredns-565d847f94-48b4v": networkPlugin cni failed to teardown pod "coredns-565d847f94-48b4v_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.10 -j CNI-013276935d946a4db99e3e05 -m comment --comment name: "crio" id: "56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-013276935d946a4db99e3e05':No such file or directory
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         Try `iptables -h' or 'iptables --help' for more information.
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:         ]
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]:  > pod="kube-system/coredns-565d847f94-48b4v"
	Oct 25 01:49:00 newest-cni-014519 kubelet[1203]: E1025 01:49:00.646026    1203 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-565d847f94-48b4v_kube-system(cfcce53d-c202-4d51-91e2-4504a0b7ab56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-565d847f94-48b4v_kube-system(cfcce53d-c202-4d51-91e2-4504a0b7ab56)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00\\\" network for pod \\\"coredns-565d847f94-48b4v\\\": networkPlugin cni failed to set up pod \\\"coredns-565d847f94-48b4v_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00\\\" network for pod \\\"coredns-565d847f94-48b4v\\\": networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-48b4v_kube-system\\\" n
etwork: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.10 -j CNI-013276935d946a4db99e3e05 -m comment --comment name: \\\"crio\\\" id: \\\"56db266150fbe0da98fe2b1639d4134367e5acec109b563c1986fcd934d83f00\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-013276935d946a4db99e3e05':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-565d847f94-48b4v" podUID=cfcce53d-c202-4d51-91e2-4504a0b7ab56
	Oct 25 01:49:01 newest-cni-014519 kubelet[1203]: I1025 01:49:01.720166    1203 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2b5aa6cd6bf285814968ffe5ad9b5aeabfc8efb36c7c2d3d0782526b08f9615a"
	Oct 25 01:49:02 newest-cni-014519 kubelet[1203]: I1025 01:49:02.353449    1203 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d7d47d6175b20dd0398269055aba4f03a47477527f8d5df6b5885dd1e11f02e5"
	Oct 25 01:49:02 newest-cni-014519 kubelet[1203]: I1025 01:49:02.483716    1203 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3ee97de7a1ce626673af734ff712924242e2d515fabd82803f3ca71b42ee152a"
	Oct 25 01:49:03 newest-cni-014519 kubelet[1203]: I1025 01:49:03.008044    1203 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 25 01:49:03 newest-cni-014519 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Oct 25 01:49:03 newest-cni-014519 systemd[1]: kubelet.service: Succeeded.
	Oct 25 01:49:03 newest-cni-014519 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [8c32d977d6a4] <==
	* I1025 01:47:58.687763       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	* 
	* ==> storage-provisioner [a2108138e300] <==
	* I1025 01:48:49.091272       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 01:49:40.346475   10136 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-014519 -n newest-cni-014519
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-014519 -n newest-cni-014519: exit status 2 (1.9229205s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "newest-cni-014519" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (43.92s)
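For reference, a minimal sketch (not part of the report itself) of how the paused-apiserver check above could be repeated by hand, assuming the same profile name and binary path that appear in the log; it simply shells out to the same status command that helpers_test.go ran.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Query only the APIServer field of the profile's status, mirroring the
		// "status --format={{.APIServer}} -p newest-cni-014519" invocation above.
		out, err := exec.Command("out/minikube-windows-amd64.exe",
			"status", "--format={{.APIServer}}", "-p", "newest-cni-014519").CombinedOutput()
		// In the failure captured here this prints "Paused" and err carries exit status 2.
		fmt.Printf("apiserver status: %s (err: %v)\n", out, err)
	}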

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (330.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-012957 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker
E1025 01:49:56.764185    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
E1025 01:50:37.739538    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
E1025 01:51:07.675712    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:51:13.517607    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
E1025 01:51:26.553368    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 01:51:59.664733    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
E1025 01:53:03.396410    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 01:53:23.699225    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:53:29.567538    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
E1025 01:53:36.033209    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.
E1025 01:53:36.048008    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.
E1025 01:53:36.063723    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.
E1025 01:53:36.094975    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.
E1025 01:53:36.141477    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.
E1025 01:53:36.235396    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.
E1025 01:53:36.395986    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.
E1025 01:53:36.723405    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.
E1025 01:53:37.372085    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.
E1025 01:53:38.661301    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.
E1025 01:53:41.224540    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.
E1025 01:53:46.353129    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.
E1025 01:53:51.521776    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:53:56.607088    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.
E1025 01:53:57.362413    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
E1025 01:54:11.598227    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 01:54:15.698109    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
E1025 01:54:17.090949    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kindnet-012957 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: exit status 80 (5m29.9494411s)

                                                
                                                
-- stdout --
	* [kindnet-012957] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14956
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kindnet-012957 in cluster kindnet-012957
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.18 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 01:49:56.298999    9496 out.go:296] Setting OutFile to fd 1512 ...
	I1025 01:49:56.358216    9496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:49:56.358216    9496 out.go:309] Setting ErrFile to fd 2024...
	I1025 01:49:56.358216    9496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:49:56.379225    9496 out.go:303] Setting JSON to false
	I1025 01:49:56.381229    9496 start.go:116] hostinfo: {"hostname":"minikube8","uptime":12240,"bootTime":1666650356,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W1025 01:49:56.382240    9496 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 01:49:56.392231    9496 out.go:177] * [kindnet-012957] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1025 01:49:56.397776    9496 notify.go:220] Checking for updates...
	I1025 01:49:56.400966    9496 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 01:49:56.405409    9496 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I1025 01:49:56.411358    9496 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 01:49:56.416399    9496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 01:49:56.420346    9496 config.go:180] Loaded profile config "calico-012958": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:49:56.421348    9496 config.go:180] Loaded profile config "cilium-012958": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:49:56.421551    9496 config.go:180] Loaded profile config "false-012957": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:49:56.421838    9496 driver.go:362] Setting default libvirt URI to qemu:///system
	I1025 01:49:56.717058    9496 docker.go:137] docker version: linux-20.10.17
	I1025 01:49:56.725392    9496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:49:57.303088    9496 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:77 OomKillDisable:true NGoroutines:60 SystemTime:2022-10-25 01:49:56.9044095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:49:57.443579    9496 out.go:177] * Using the docker driver based on user configuration
	I1025 01:49:57.447593    9496 start.go:282] selected driver: docker
	I1025 01:49:57.447593    9496 start.go:808] validating driver "docker" against <nil>
	I1025 01:49:57.448076    9496 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 01:49:57.535570    9496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:49:58.164322    9496 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:82 OomKillDisable:true NGoroutines:64 SystemTime:2022-10-25 01:49:57.7211527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:49:58.164322    9496 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1025 01:49:58.165323    9496 start_flags.go:885] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 01:49:58.167322    9496 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 01:49:58.169374    9496 cni.go:95] Creating CNI manager for "kindnet"
	I1025 01:49:58.169374    9496 start_flags.go:312] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 01:49:58.169374    9496 start_flags.go:317] config:
	{Name:kindnet-012957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kindnet-012957 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 01:49:58.172329    9496 out.go:177] * Starting control plane node kindnet-012957 in cluster kindnet-012957
	I1025 01:49:58.174323    9496 cache.go:120] Beginning downloading kic base image for docker with docker
	I1025 01:49:58.177329    9496 out.go:177] * Pulling base image ...
	I1025 01:49:58.180342    9496 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 01:49:58.180342    9496 image.go:82] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1025 01:49:58.180342    9496 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1025 01:49:58.181336    9496 cache.go:57] Caching tarball of preloaded images
	I1025 01:49:58.181336    9496 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 01:49:58.181336    9496 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1025 01:49:58.181336    9496 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\config.json ...
	I1025 01:49:58.181336    9496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\config.json: {Name:mke7f39e9e56f0dfe71bf9f649979ebd44f17abc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:49:58.416768    9496 image.go:86] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon, skipping pull
	I1025 01:49:58.416768    9496 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in daemon, skipping load
	I1025 01:49:58.416768    9496 cache.go:208] Successfully downloaded all kic artifacts
	I1025 01:49:58.416768    9496 start.go:364] acquiring machines lock for kindnet-012957: {Name:mk25c297826c854d346a2a49d48d06a0944c3a37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 01:49:58.416768    9496 start.go:368] acquired machines lock for "kindnet-012957" in 0s
	I1025 01:49:58.416768    9496 start.go:93] Provisioning new machine with config: &{Name:kindnet-012957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kindnet-012957 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 01:49:58.416768    9496 start.go:125] createHost starting for "" (driver="docker")
	I1025 01:49:58.420780    9496 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1025 01:49:58.421770    9496 start.go:159] libmachine.API.Create for "kindnet-012957" (driver="docker")
	I1025 01:49:58.421770    9496 client.go:168] LocalClient.Create starting
	I1025 01:49:58.421770    9496 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I1025 01:49:58.421770    9496 main.go:134] libmachine: Decoding PEM data...
	I1025 01:49:58.421770    9496 main.go:134] libmachine: Parsing certificate...
	I1025 01:49:58.422767    9496 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I1025 01:49:58.422767    9496 main.go:134] libmachine: Decoding PEM data...
	I1025 01:49:58.422767    9496 main.go:134] libmachine: Parsing certificate...
	I1025 01:49:58.431773    9496 cli_runner.go:164] Run: docker network inspect kindnet-012957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 01:49:58.653061    9496 cli_runner.go:211] docker network inspect kindnet-012957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 01:49:58.660065    9496 network_create.go:272] running [docker network inspect kindnet-012957] to gather additional debugging logs...
	I1025 01:49:58.660065    9496 cli_runner.go:164] Run: docker network inspect kindnet-012957
	W1025 01:49:58.876087    9496 cli_runner.go:211] docker network inspect kindnet-012957 returned with exit code 1
	I1025 01:49:58.876087    9496 network_create.go:275] error running [docker network inspect kindnet-012957]: docker network inspect kindnet-012957: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-012957
	I1025 01:49:58.876087    9496 network_create.go:277] output of [docker network inspect kindnet-012957]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-012957
	
	** /stderr **
	I1025 01:49:58.891092    9496 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 01:49:59.123085    9496 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000aa6390] misses:0}
	I1025 01:49:59.123085    9496 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:49:59.123085    9496 network_create.go:115] attempt to create docker network kindnet-012957 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 01:49:59.130079    9496 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-012957 kindnet-012957
	W1025 01:49:59.358421    9496 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-012957 kindnet-012957 returned with exit code 1
	W1025 01:49:59.359561    9496 network_create.go:107] failed to create docker network kindnet-012957 192.168.49.0/24, will retry: subnet is taken
	I1025 01:49:59.386456    9496 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000aa6390] amended:false}} dirty:map[] misses:0}
	I1025 01:49:59.386456    9496 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:49:59.418426    9496 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000aa6390] amended:true}} dirty:map[192.168.49.0:0xc000aa6390 192.168.58.0:0xc00000a678] misses:0}
	I1025 01:49:59.418426    9496 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:49:59.419751    9496 network_create.go:115] attempt to create docker network kindnet-012957 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1025 01:49:59.431438    9496 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-012957 kindnet-012957
	W1025 01:49:59.660769    9496 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-012957 kindnet-012957 returned with exit code 1
	W1025 01:49:59.660796    9496 network_create.go:107] failed to create docker network kindnet-012957 192.168.58.0/24, will retry: subnet is taken
	I1025 01:49:59.688682    9496 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000aa6390] amended:true}} dirty:map[192.168.49.0:0xc000aa6390 192.168.58.0:0xc00000a678] misses:1}
	I1025 01:49:59.689659    9496 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:49:59.722656    9496 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000aa6390] amended:true}} dirty:map[192.168.49.0:0xc000aa6390 192.168.58.0:0xc00000a678 192.168.67.0:0xc00014abb0] misses:1}
	I1025 01:49:59.722656    9496 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:49:59.722656    9496 network_create.go:115] attempt to create docker network kindnet-012957 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1025 01:49:59.734644    9496 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-012957 kindnet-012957
	W1025 01:49:59.999254    9496 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-012957 kindnet-012957 returned with exit code 1
	W1025 01:49:59.999762    9496 network_create.go:107] failed to create docker network kindnet-012957 192.168.67.0/24, will retry: subnet is taken
	I1025 01:50:00.021663    9496 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000aa6390] amended:true}} dirty:map[192.168.49.0:0xc000aa6390 192.168.58.0:0xc00000a678 192.168.67.0:0xc00014abb0] misses:2}
	I1025 01:50:00.022689    9496 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:50:00.043639    9496 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000aa6390] amended:true}} dirty:map[192.168.49.0:0xc000aa6390 192.168.58.0:0xc00000a678 192.168.67.0:0xc00014abb0 192.168.76.0:0xc000a3c350] misses:2}
	I1025 01:50:00.043639    9496 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:50:00.043639    9496 network_create.go:115] attempt to create docker network kindnet-012957 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 01:50:00.051646    9496 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-012957 kindnet-012957
	W1025 01:50:00.254004    9496 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-012957 kindnet-012957 returned with exit code 1
	W1025 01:50:00.254004    9496 network_create.go:107] failed to create docker network kindnet-012957 192.168.76.0/24, will retry: subnet is taken
	I1025 01:50:00.272004    9496 network.go:286] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000aa6390] amended:true}} dirty:map[192.168.49.0:0xc000aa6390 192.168.58.0:0xc00000a678 192.168.67.0:0xc00014abb0 192.168.76.0:0xc000a3c350] misses:3}
	I1025 01:50:00.272004    9496 network.go:244] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:50:00.301033    9496 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000aa6390] amended:true}} dirty:map[192.168.49.0:0xc000aa6390 192.168.58.0:0xc00000a678 192.168.67.0:0xc00014abb0 192.168.76.0:0xc000a3c350 192.168.85.0:0xc000a3c4c0] misses:3}
	I1025 01:50:00.302009    9496 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1025 01:50:00.302009    9496 network_create.go:115] attempt to create docker network kindnet-012957 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 01:50:00.312025    9496 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-012957 kindnet-012957
	W1025 01:50:00.539373    9496 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-012957 kindnet-012957 returned with exit code 1
	W1025 01:50:00.539373    9496 network_create.go:107] failed to create docker network kindnet-012957 192.168.85.0/24, will retry: subnet is taken
	W1025 01:50:00.539373    9496 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create docker network kindnet-012957: subnet is taken
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create docker network kindnet-012957: subnet is taken
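The five failed attempts above follow a simple subnet-probing pattern: reserve a candidate /24, try `docker network create`, and fall through to the next candidate when Docker reports the address space as already in use (which minikube surfaces as "subnet is taken"). Below is a minimal Go sketch of that pattern; the network name, candidate list, and the "overlap" error-string check are illustrative assumptions, not minikube's actual network_create.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tryCreateNetwork attempts to create a bridge network on each candidate
// subnet in turn, moving on when Docker rejects the subnet as overlapping.
func tryCreateNetwork(name string, candidates []string) (string, error) {
	for _, cidr := range candidates {
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+cidr, name).CombinedOutput()
		if err == nil {
			return cidr, nil
		}
		// Assumption: an overlapping subnet shows up as a "Pool overlaps ..."
		// message; in that case try the next candidate instead of failing.
		if strings.Contains(string(out), "overlap") {
			continue
		}
		return "", fmt.Errorf("docker network create %s: %v: %s", cidr, err, out)
	}
	return "", fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	subnet, err := tryCreateNetwork("kindnet-example",
		[]string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"})
	if err != nil {
		fmt.Println("giving up, falling back to the default bridge:", err)
		return
	}
	fmt.Println("created network on", subnet)
}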
	I1025 01:50:00.557402    9496 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 01:50:00.819478    9496 cli_runner.go:164] Run: docker volume create kindnet-012957 --label name.minikube.sigs.k8s.io=kindnet-012957 --label created_by.minikube.sigs.k8s.io=true
	I1025 01:50:01.031001    9496 oci.go:103] Successfully created a docker volume kindnet-012957
	I1025 01:50:01.045992    9496 cli_runner.go:164] Run: docker run --rm --name kindnet-012957-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-012957 --entrypoint /usr/bin/test -v kindnet-012957:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib
	I1025 01:50:03.175499    9496 cli_runner.go:217] Completed: docker run --rm --name kindnet-012957-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-012957 --entrypoint /usr/bin/test -v kindnet-012957:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -d /var/lib: (2.1292716s)
	I1025 01:50:03.175653    9496 oci.go:107] Successfully prepared a docker volume kindnet-012957
	I1025 01:50:03.175653    9496 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 01:50:03.175785    9496 kic.go:179] Starting extracting preloaded images to volume ...
	I1025 01:50:03.193773    9496 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-012957:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 01:50:28.121844    9496 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-012957:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 -I lz4 -xf /preloaded.tar -C /extractDir: (24.9278969s)
	I1025 01:50:28.121844    9496 kic.go:188] duration metric: took 24.945884 seconds to extract preloaded images to volume
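The extraction step timed above mounts the preload tarball and the freshly created volume into a throwaway container whose entrypoint is tar. A hedged Go sketch of the same idea follows; the host path and volume name are placeholders, and the image tag is given without its digest, so this is illustrative only.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Extract a (placeholder) preload tarball into a named volume by running
	// tar inside a short-lived container, mirroring the command in the log.
	start := time.Now()
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", `C:\preload\preloaded-images.tar.lz4:/preloaded.tar:ro`, // placeholder host path
		"-v", "kindnet-example:/extractDir", // placeholder volume name
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("extracted preloaded images in %s\n", time.Since(start))
}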
	I1025 01:50:28.128824    9496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 01:50:28.687200    9496 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:87 OomKillDisable:true NGoroutines:63 SystemTime:2022-10-25 01:50:28.2936743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 01:50:28.696710    9496 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 01:50:29.345691    9496 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-012957 --name kindnet-012957 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-012957 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-012957 --volume kindnet-012957:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191
	I1025 01:50:30.658814    9496 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-012957 --name kindnet-012957 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-012957 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-012957 --volume kindnet-012957:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191: (1.3129914s)
	I1025 01:50:30.670435    9496 cli_runner.go:164] Run: docker container inspect kindnet-012957 --format={{.State.Running}}
	I1025 01:50:30.903168    9496 cli_runner.go:164] Run: docker container inspect kindnet-012957 --format={{.State.Status}}
	I1025 01:50:31.159312    9496 cli_runner.go:164] Run: docker exec kindnet-012957 stat /var/lib/dpkg/alternatives/iptables
	I1025 01:50:31.543516    9496 oci.go:144] the created container "kindnet-012957" has a running status.
	I1025 01:50:31.543516    9496 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-012957\id_rsa...
	I1025 01:50:32.068946    9496 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-012957\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 01:50:32.427410    9496 cli_runner.go:164] Run: docker container inspect kindnet-012957 --format={{.State.Status}}
	I1025 01:50:32.637954    9496 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 01:50:32.637954    9496 kic_runner.go:114] Args: [docker exec --privileged kindnet-012957 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 01:50:33.073104    9496 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-012957\id_rsa...
	I1025 01:50:33.665604    9496 cli_runner.go:164] Run: docker container inspect kindnet-012957 --format={{.State.Status}}
	I1025 01:50:33.933130    9496 machine.go:88] provisioning docker machine ...
	I1025 01:50:33.933130    9496 ubuntu.go:169] provisioning hostname "kindnet-012957"
	I1025 01:50:33.940129    9496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-012957
	I1025 01:50:34.165064    9496 main.go:134] libmachine: Using SSH client type: native
	I1025 01:50:34.172066    9496 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50691 <nil> <nil>}
	I1025 01:50:34.172066    9496 main.go:134] libmachine: About to run SSH command:
	sudo hostname kindnet-012957 && echo "kindnet-012957" | sudo tee /etc/hostname
	I1025 01:50:34.442194    9496 main.go:134] libmachine: SSH cmd err, output: <nil>: kindnet-012957
	
	I1025 01:50:34.451204    9496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-012957
	I1025 01:50:34.699301    9496 main.go:134] libmachine: Using SSH client type: native
	I1025 01:50:34.700308    9496 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50691 <nil> <nil>}
	I1025 01:50:34.700308    9496 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-012957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-012957/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-012957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 01:50:34.911867    9496 main.go:134] libmachine: SSH cmd err, output: <nil>: 
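The shell snippet sent over SSH above is an idempotent "ensure hostname entry" update of /etc/hosts: rewrite an existing 127.0.1.1 alias or append one if the hostname is missing. A rough local equivalent in Go, assuming a writable hosts file, is sketched below; it is not minikube's provisioning code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry maps 127.0.1.1 to hostname in the given hosts file,
// rewriting an existing 127.0.1.1 line or appending a new one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, line := range lines {
		trimmed := strings.TrimSpace(line)
		if strings.HasSuffix(trimmed, " "+hostname) || strings.HasSuffix(trimmed, "\t"+hostname) {
			return nil // an entry for this hostname already exists
		}
	}
	replaced := false
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // rewrite the existing loopback alias
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname) // or append a new one
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "kindnet-012957"); err != nil {
		fmt.Println("update failed (expected without root):", err)
	}
}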
	I1025 01:50:34.911867    9496 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I1025 01:50:34.911867    9496 ubuntu.go:177] setting up certificates
	I1025 01:50:34.911867    9496 provision.go:83] configureAuth start
	I1025 01:50:34.921562    9496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-012957
	I1025 01:50:35.149575    9496 provision.go:138] copyHostCerts
	I1025 01:50:35.149575    9496 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I1025 01:50:35.149575    9496 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I1025 01:50:35.150573    9496 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1025 01:50:35.151562    9496 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I1025 01:50:35.151562    9496 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I1025 01:50:35.151562    9496 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1025 01:50:35.152562    9496 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I1025 01:50:35.152562    9496 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I1025 01:50:35.152562    9496 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1675 bytes)
	I1025 01:50:35.153568    9496 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-012957 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-012957]
	I1025 01:50:35.604420    9496 provision.go:172] copyRemoteCerts
	I1025 01:50:35.619514    9496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 01:50:35.630205    9496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-012957
	I1025 01:50:35.857302    9496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50691 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-012957\id_rsa Username:docker}
	I1025 01:50:36.008430    9496 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 01:50:36.075209    9496 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 01:50:36.141664    9496 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 01:50:36.200765    9496 provision.go:86] duration metric: configureAuth took 1.2888893s
	I1025 01:50:36.200902    9496 ubuntu.go:193] setting minikube options for container-runtime
	I1025 01:50:36.201054    9496 config.go:180] Loaded profile config "kindnet-012957": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:50:36.212758    9496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-012957
	I1025 01:50:36.472529    9496 main.go:134] libmachine: Using SSH client type: native
	I1025 01:50:36.473516    9496 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50691 <nil> <nil>}
	I1025 01:50:36.473516    9496 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 01:50:36.684797    9496 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 01:50:36.684797    9496 ubuntu.go:71] root file system type: overlay
	I1025 01:50:36.684797    9496 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 01:50:36.692776    9496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-012957
	I1025 01:50:36.913656    9496 main.go:134] libmachine: Using SSH client type: native
	I1025 01:50:36.913656    9496 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50691 <nil> <nil>}
	I1025 01:50:36.913656    9496 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 01:50:37.250674    9496 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 01:50:37.257663    9496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-012957
	I1025 01:50:37.495119    9496 main.go:134] libmachine: Using SSH client type: native
	I1025 01:50:37.495119    9496 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0xfdbca0] 0xfdec20 <nil>  [] 0s} 127.0.0.1 50691 <nil> <nil>}
	I1025 01:50:37.496116    9496 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 01:50:39.141850    9496 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-09-08 23:09:37.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-10-25 01:50:37.231441000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
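The one-liner that produced the diff above (`sudo diff -u ... || { sudo mv ...; sudo systemctl ... }`) only swaps in the newly rendered unit and restarts Docker when the file actually changed. A minimal Go sketch of that compare-then-replace idiom follows; paths and the unit name match this run, but the code is an illustration, not minikube's provisioner.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// replaceIfChanged installs newPath over oldPath and reloads/restarts the unit
// only when the rendered unit differs from what is already installed.
func replaceIfChanged(oldPath, newPath, unit string) error {
	oldData, _ := os.ReadFile(oldPath) // a missing file simply counts as "changed"
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if bytes.Equal(oldData, newData) {
		return os.Remove(newPath) // identical: drop the staged copy, nothing to restart
	}
	if err := os.Rename(newPath, oldPath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", unit}, {"restart", unit},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := replaceIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
	fmt.Println("result:", err) // expected to fail without root; shown for illustration
}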
	I1025 01:50:39.141914    9496 machine.go:91] provisioned docker machine in 5.2087478s
	I1025 01:50:39.141960    9496 client.go:171] LocalClient.Create took 40.7198588s
	I1025 01:50:39.141960    9496 start.go:167] duration metric: libmachine.API.Create for "kindnet-012957" took 40.7199042s
	I1025 01:50:39.142007    9496 start.go:300] post-start starting for "kindnet-012957" (driver="docker")
	I1025 01:50:39.142007    9496 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 01:50:39.157862    9496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 01:50:39.165860    9496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-012957
	I1025 01:50:39.413422    9496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50691 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-012957\id_rsa Username:docker}
	I1025 01:50:39.556843    9496 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 01:50:39.573859    9496 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 01:50:39.573859    9496 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 01:50:39.573859    9496 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 01:50:39.573859    9496 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1025 01:50:39.573859    9496 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I1025 01:50:39.574687    9496 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I1025 01:50:39.576155    9496 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem -> 42002.pem in /etc/ssl/certs
	I1025 01:50:39.600744    9496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 01:50:39.625985    9496 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem --> /etc/ssl/certs/42002.pem (1708 bytes)
	I1025 01:50:39.684356    9496 start.go:303] post-start completed in 542.3449ms
	I1025 01:50:39.702617    9496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-012957
	I1025 01:50:39.944265    9496 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\config.json ...
	I1025 01:50:39.958180    9496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 01:50:39.964661    9496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-012957
	I1025 01:50:40.182034    9496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50691 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-012957\id_rsa Username:docker}
	I1025 01:50:40.329696    9496 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 01:50:40.342586    9496 start.go:128] duration metric: createHost completed in 41.9255238s
	I1025 01:50:40.342586    9496 start.go:83] releasing machines lock for "kindnet-012957", held for 41.9255238s
	I1025 01:50:40.349626    9496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-012957
	I1025 01:50:40.581175    9496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 01:50:40.598166    9496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-012957
	I1025 01:50:40.601155    9496 ssh_runner.go:195] Run: systemctl --version
	I1025 01:50:40.616136    9496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-012957
	I1025 01:50:40.830146    9496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50691 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-012957\id_rsa Username:docker}
	I1025 01:50:40.845140    9496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50691 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-012957\id_rsa Username:docker}
	I1025 01:50:40.935720    9496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 01:50:41.032315    9496 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I1025 01:50:41.089078    9496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:50:41.306905    9496 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 01:50:41.497950    9496 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 01:50:41.528467    9496 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1025 01:50:41.537775    9496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 01:50:41.564065    9496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 01:50:41.626081    9496 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 01:50:41.802131    9496 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 01:50:41.984991    9496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:50:42.152811    9496 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 01:50:42.743856    9496 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 01:50:42.969990    9496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 01:50:43.166707    9496 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1025 01:50:43.204777    9496 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 01:50:43.222745    9496 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 01:50:43.233744    9496 start.go:472] Will wait 60s for crictl version
	I1025 01:50:43.242738    9496 ssh_runner.go:195] Run: sudo crictl version
	I1025 01:50:43.325389    9496 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.18
	RuntimeApiVersion:  1.41.0
	I1025 01:50:43.332393    9496 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 01:50:43.416608    9496 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 01:50:43.480318    9496 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.18 ...
	I1025 01:50:43.494062    9496 cli_runner.go:164] Run: docker exec -t kindnet-012957 dig +short host.docker.internal
	I1025 01:50:43.913143    9496 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1025 01:50:43.923381    9496 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1025 01:50:43.935362    9496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 01:50:43.973041    9496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-012957
	I1025 01:50:44.196712    9496 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1025 01:50:44.214726    9496 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 01:50:44.281240    9496 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 01:50:44.281240    9496 docker.go:542] Images already preloaded, skipping extraction
	I1025 01:50:44.297670    9496 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 01:50:44.355427    9496 docker.go:611] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 01:50:44.356431    9496 cache_images.go:84] Images are preloaded, skipping loading
	I1025 01:50:44.363418    9496 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 01:50:44.533287    9496 cni.go:95] Creating CNI manager for "kindnet"
	I1025 01:50:44.533287    9496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 01:50:44.533287    9496 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-012957 NodeName:kindnet-012957 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.0.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1025 01:50:44.533287    9496 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kindnet-012957"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
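	(The kubeadm config above is rendered from templates filled with per-profile values such as the advertise address, CRI socket, and node name. A tiny, hypothetical text/template sketch of how just the InitConfiguration stanza could be produced is shown below; minikube's real templates are much larger and live elsewhere in the codebase.)

package main

import (
	"os"
	"text/template"
)

// A trimmed-down, illustrative template for the InitConfiguration stanza.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values taken from this run's log; the template itself is an assumption.
	_ = t.Execute(os.Stdout, map[string]interface{}{
		"AdvertiseAddress": "172.17.0.2",
		"APIServerPort":    8443,
		"CRISocket":        "/var/run/cri-dockerd.sock",
		"NodeName":         "kindnet-012957",
		"NodeIP":           "172.17.0.2",
	})
}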
	
	I1025 01:50:44.533287    9496 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kindnet-012957 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:kindnet-012957 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I1025 01:50:44.543290    9496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1025 01:50:44.564551    9496 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 01:50:44.578781    9496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 01:50:44.603380    9496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (474 bytes)
	I1025 01:50:44.644759    9496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 01:50:44.686691    9496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2031 bytes)
	I1025 01:50:44.733682    9496 ssh_runner.go:195] Run: grep 172.17.0.2	control-plane.minikube.internal$ /etc/hosts
	I1025 01:50:44.743716    9496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.0.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 01:50:44.774049    9496 certs.go:54] Setting up C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957 for IP: 172.17.0.2
	I1025 01:50:44.775031    9496 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key
	I1025 01:50:44.775031    9496 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key
	I1025 01:50:44.776047    9496 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\client.key
	I1025 01:50:44.776047    9496 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\client.crt with IP's: []
	I1025 01:50:44.975115    9496 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\client.crt ...
	I1025 01:50:44.975115    9496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\client.crt: {Name:mk38a1b0a9fff6dbada506bc0e399da59ca1d067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:50:44.977965    9496 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\client.key ...
	I1025 01:50:44.978133    9496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\client.key: {Name:mk6a88804da98e72b1e79f4d9b0b1c8f9aebe522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:50:44.979693    9496 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\apiserver.key.7b749c5f
	I1025 01:50:44.980120    9496 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\apiserver.crt.7b749c5f with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 01:50:45.148575    9496 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\apiserver.crt.7b749c5f ...
	I1025 01:50:45.148575    9496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\apiserver.crt.7b749c5f: {Name:mk87c23bb7564de513348ad223285cc8e44b33ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:50:45.149555    9496 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\apiserver.key.7b749c5f ...
	I1025 01:50:45.149555    9496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\apiserver.key.7b749c5f: {Name:mkcfe1ea2ec095af8b3df4407ef2b4610d11f219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:50:45.150683    9496 certs.go:320] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\apiserver.crt.7b749c5f -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\apiserver.crt
	I1025 01:50:45.156715    9496 certs.go:324] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\apiserver.key.7b749c5f -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\apiserver.key
	I1025 01:50:45.159712    9496 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\proxy-client.key
	I1025 01:50:45.159712    9496 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\proxy-client.crt with IP's: []
	I1025 01:50:45.354859    9496 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\proxy-client.crt ...
	I1025 01:50:45.354859    9496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\proxy-client.crt: {Name:mk3c9e120ade4ba9373b52ddc4a5c77960922e7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:50:45.356628    9496 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\proxy-client.key ...
	I1025 01:50:45.356628    9496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\proxy-client.key: {Name:mk00929e021ead72b2c0a01c54e5c0dd256c3f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
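The client, apiserver, and proxy-client certificates generated in the lines above are CA-signed certificates carrying the IP SANs listed earlier. The following self-contained crypto/x509 sketch shows the same kind of issuance with a throwaway CA standing in for minikubeCA and the SANs from this run; it is illustrative only and not minikube's crypto.go (error handling is elided for brevity).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway self-signed CA (stand-in for minikubeCA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate carrying the IP SANs seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("172.17.0.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}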
	I1025 01:50:45.363672    9496 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\4200.pem (1338 bytes)
	W1025 01:50:45.364237    9496 certs.go:384] ignoring C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\4200_empty.pem, impossibly tiny 0 bytes
	I1025 01:50:45.364237    9496 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1025 01:50:45.364551    9496 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1025 01:50:45.364764    9496 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1025 01:50:45.364988    9496 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1025 01:50:45.365215    9496 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem (1708 bytes)
	I1025 01:50:45.374350    9496 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 01:50:45.447805    9496 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 01:50:45.511539    9496 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 01:50:45.567056    9496 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-012957\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 01:50:45.632488    9496 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 01:50:45.685222    9496 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 01:50:45.738138    9496 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 01:50:45.793905    9496 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 01:50:45.865774    9496 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\42002.pem --> /usr/share/ca-certificates/42002.pem (1708 bytes)
	I1025 01:50:45.924792    9496 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 01:50:45.980438    9496 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\4200.pem --> /usr/share/ca-certificates/4200.pem (1338 bytes)
	I1025 01:50:46.034001    9496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 01:50:46.078013    9496 ssh_runner.go:195] Run: openssl version
	I1025 01:50:46.108990    9496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42002.pem && ln -fs /usr/share/ca-certificates/42002.pem /etc/ssl/certs/42002.pem"
	I1025 01:50:46.145466    9496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42002.pem
	I1025 01:50:46.156457    9496 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 25 00:08 /usr/share/ca-certificates/42002.pem
	I1025 01:50:46.165460    9496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42002.pem
	I1025 01:50:46.202326    9496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42002.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 01:50:46.244339    9496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 01:50:46.282340    9496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 01:50:46.295340    9496 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 25 00:00 /usr/share/ca-certificates/minikubeCA.pem
	I1025 01:50:46.306334    9496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 01:50:46.341347    9496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 01:50:46.382797    9496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4200.pem && ln -fs /usr/share/ca-certificates/4200.pem /etc/ssl/certs/4200.pem"
	I1025 01:50:46.432322    9496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4200.pem
	I1025 01:50:46.448315    9496 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 25 00:08 /usr/share/ca-certificates/4200.pem
	I1025 01:50:46.460316    9496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4200.pem
	I1025 01:50:46.483329    9496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4200.pem /etc/ssl/certs/51391683.0"
	I1025 01:50:46.516526    9496 kubeadm.go:396] StartCluster: {Name:kindnet-012957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kindnet-012957 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 01:50:46.526527    9496 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 01:50:46.595797    9496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 01:50:46.631779    9496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 01:50:46.658867    9496 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1025 01:50:46.676245    9496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 01:50:46.700207    9496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 01:50:46.700481    9496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 01:50:46.797000    9496 kubeadm.go:317] W1025 01:50:46.793564    1209 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 01:50:46.876501    9496 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 01:50:47.141308    9496 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 01:51:11.683283    9496 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1025 01:51:11.683393    9496 kubeadm.go:317] [preflight] Running pre-flight checks
	I1025 01:51:11.683712    9496 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 01:51:11.684221    9496 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 01:51:11.684444    9496 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 01:51:11.684730    9496 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 01:51:11.687271    9496 out.go:204]   - Generating certificates and keys ...
	I1025 01:51:11.687271    9496 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1025 01:51:11.687815    9496 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1025 01:51:11.688298    9496 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 01:51:11.688534    9496 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1025 01:51:11.688826    9496 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1025 01:51:11.688951    9496 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1025 01:51:11.689168    9496 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1025 01:51:11.690270    9496 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [kindnet-012957 localhost] and IPs [172.17.0.2 127.0.0.1 ::1]
	I1025 01:51:11.690573    9496 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1025 01:51:11.691068    9496 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [kindnet-012957 localhost] and IPs [172.17.0.2 127.0.0.1 ::1]
	I1025 01:51:11.691356    9496 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 01:51:11.691658    9496 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 01:51:11.691859    9496 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1025 01:51:11.691950    9496 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 01:51:11.691950    9496 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 01:51:11.691950    9496 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 01:51:11.692772    9496 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 01:51:11.692962    9496 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 01:51:11.693024    9496 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 01:51:11.693024    9496 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 01:51:11.693024    9496 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1025 01:51:11.693747    9496 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 01:51:11.699792    9496 out.go:204]   - Booting up control plane ...
	I1025 01:51:11.699792    9496 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 01:51:11.699792    9496 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 01:51:11.700784    9496 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 01:51:11.700784    9496 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 01:51:11.700784    9496 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 01:51:11.701785    9496 kubeadm.go:317] [apiclient] All control plane components are healthy after 18.512688 seconds
	I1025 01:51:11.701785    9496 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 01:51:11.701785    9496 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 01:51:11.702803    9496 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 01:51:11.702803    9496 kubeadm.go:317] [mark-control-plane] Marking the node kindnet-012957 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 01:51:11.702803    9496 kubeadm.go:317] [bootstrap-token] Using token: 1ki6gz.pcl31b5p1jy8dtyl
	I1025 01:51:11.706795    9496 out.go:204]   - Configuring RBAC rules ...
	I1025 01:51:11.706795    9496 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 01:51:11.706795    9496 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 01:51:11.707810    9496 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 01:51:11.707810    9496 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 01:51:11.707810    9496 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 01:51:11.708788    9496 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 01:51:11.708788    9496 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 01:51:11.708788    9496 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I1025 01:51:11.708788    9496 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I1025 01:51:11.708788    9496 kubeadm.go:317] 
	I1025 01:51:11.708788    9496 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I1025 01:51:11.708788    9496 kubeadm.go:317] 
	I1025 01:51:11.709788    9496 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I1025 01:51:11.709788    9496 kubeadm.go:317] 
	I1025 01:51:11.709788    9496 kubeadm.go:317]   mkdir -p $HOME/.kube
	I1025 01:51:11.709788    9496 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 01:51:11.709788    9496 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 01:51:11.709788    9496 kubeadm.go:317] 
	I1025 01:51:11.709788    9496 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I1025 01:51:11.709788    9496 kubeadm.go:317] 
	I1025 01:51:11.709788    9496 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 01:51:11.709788    9496 kubeadm.go:317] 
	I1025 01:51:11.710788    9496 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I1025 01:51:11.710788    9496 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 01:51:11.710788    9496 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 01:51:11.710788    9496 kubeadm.go:317] 
	I1025 01:51:11.710788    9496 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 01:51:11.711794    9496 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I1025 01:51:11.711794    9496 kubeadm.go:317] 
	I1025 01:51:11.711794    9496 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 1ki6gz.pcl31b5p1jy8dtyl \
	I1025 01:51:11.711794    9496 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:cfe7dd7a8e61587818260abb61477c9598aed0e51cc4d8006ee76bf98159c639 \
	I1025 01:51:11.712789    9496 kubeadm.go:317] 	--control-plane 
	I1025 01:51:11.712789    9496 kubeadm.go:317] 
	I1025 01:51:11.712789    9496 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I1025 01:51:11.712789    9496 kubeadm.go:317] 
	I1025 01:51:11.712789    9496 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 1ki6gz.pcl31b5p1jy8dtyl \
	I1025 01:51:11.712789    9496 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:cfe7dd7a8e61587818260abb61477c9598aed0e51cc4d8006ee76bf98159c639 
	I1025 01:51:11.712789    9496 cni.go:95] Creating CNI manager for "kindnet"
	I1025 01:51:11.715805    9496 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1025 01:51:11.728079    9496 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 01:51:11.740075    9496 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1025 01:51:11.740075    9496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1025 01:51:11.912458    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 01:51:14.425807    9496 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.5132621s)
	I1025 01:51:14.426115    9496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 01:51:14.443272    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.27.1 minikube.k8s.io/commit=e51468b57074bb26eb09785222979dd1e5fe9cd4 minikube.k8s.io/name=kindnet-012957 minikube.k8s.io/updated_at=2022_10_25T01_51_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:14.446281    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:14.495661    9496 ops.go:34] apiserver oom_adj: -16
	I1025 01:51:14.818944    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:15.525051    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:16.032618    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:16.534985    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:17.019792    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:17.523212    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:18.030274    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:18.529673    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:19.029414    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:19.525727    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:20.015552    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:20.521288    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:21.021682    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:21.527119    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:22.014253    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:22.519766    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:23.020825    9496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 01:51:23.781614    9496 kubeadm.go:1067] duration metric: took 9.3553714s to wait for elevateKubeSystemPrivileges.
	I1025 01:51:23.781614    9496 kubeadm.go:398] StartCluster complete in 37.2648239s
	I1025 01:51:23.781614    9496 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 01:51:23.781614    9496 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 01:51:23.786158    9496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1025 01:51:23.979056    9496 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1025 01:51:25.002141    9496 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-012957" rescaled to 1
	I1025 01:51:25.002333    9496 start.go:212] Will wait 5m0s for node &{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 01:51:25.002385    9496 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I1025 01:51:25.007494    9496 out.go:177] * Verifying Kubernetes components...
	I1025 01:51:25.002457    9496 addons.go:65] Setting storage-provisioner=true in profile "kindnet-012957"
	I1025 01:51:25.002457    9496 addons.go:65] Setting default-storageclass=true in profile "kindnet-012957"
	I1025 01:51:25.002333    9496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 01:51:25.003341    9496 config.go:180] Loaded profile config "kindnet-012957": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:51:25.012956    9496 addons.go:153] Setting addon storage-provisioner=true in "kindnet-012957"
	W1025 01:51:25.013101    9496 addons.go:162] addon storage-provisioner should already be in state true
	I1025 01:51:25.013101    9496 host.go:66] Checking if "kindnet-012957" exists ...
	I1025 01:51:25.013101    9496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-012957"
	I1025 01:51:25.028985    9496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 01:51:25.032990    9496 cli_runner.go:164] Run: docker container inspect kindnet-012957 --format={{.State.Status}}
	I1025 01:51:25.033993    9496 cli_runner.go:164] Run: docker container inspect kindnet-012957 --format={{.State.Status}}
	I1025 01:51:25.250944    9496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 01:51:25.253705    9496 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 01:51:25.253705    9496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 01:51:25.262857    9496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-012957
	I1025 01:51:25.280859    9496 addons.go:153] Setting addon default-storageclass=true in "kindnet-012957"
	W1025 01:51:25.280859    9496 addons.go:162] addon default-storageclass should already be in state true
	I1025 01:51:25.280859    9496 host.go:66] Checking if "kindnet-012957" exists ...
	I1025 01:51:25.297856    9496 cli_runner.go:164] Run: docker container inspect kindnet-012957 --format={{.State.Status}}
	I1025 01:51:25.498629    9496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50691 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-012957\id_rsa Username:docker}
	I1025 01:51:25.528416    9496 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 01:51:25.528500    9496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 01:51:25.535743    9496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-012957
	I1025 01:51:25.585848    9496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 01:51:25.602067    9496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-012957
	I1025 01:51:25.770096    9496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50691 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-012957\id_rsa Username:docker}
	I1025 01:51:25.819072    9496 node_ready.go:35] waiting up to 5m0s for node "kindnet-012957" to be "Ready" ...
	I1025 01:51:25.919358    9496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 01:51:26.405595    9496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 01:51:27.480577    9496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.8947158s)
	I1025 01:51:27.480814    9496 start.go:826] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I1025 01:51:27.985086    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:51:28.100674    9496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.1813005s)
	I1025 01:51:28.100674    9496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.6950675s)
	I1025 01:51:28.111884    9496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1025 01:51:28.114695    9496 addons.go:414] enableAddons completed in 3.1122887s
	I1025 01:51:30.480588    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:51:32.921300    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:51:35.416238    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:51:37.426815    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:51:39.429569    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:51:41.921582    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:51:44.423490    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:51:46.923725    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:51:48.927003    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:51:51.415130    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:51:53.423612    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:51:55.927409    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:51:58.416413    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:00.422462    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:02.920712    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:04.927773    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:07.422565    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:09.920642    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:11.923931    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:14.414581    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:16.915635    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:19.427374    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:21.916220    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:23.919938    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:25.922858    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:28.417157    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:30.925987    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:33.441966    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:35.924581    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:38.416122    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:40.421444    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:42.915532    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:44.930093    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:47.413315    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:49.421730    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:51.982070    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:54.413507    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:56.414878    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:52:58.428219    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:00.915168    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:03.419388    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:05.926360    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:07.930425    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:09.930959    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:12.419037    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:14.420113    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:16.917336    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:19.423934    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:21.912578    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:23.915668    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:26.417465    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:28.420423    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:30.422489    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:32.425240    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:34.425946    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:36.921948    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:39.422181    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:41.923118    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:43.928882    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:46.428523    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:48.926077    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:51.420191    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:53.912486    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:55.927338    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:53:58.425678    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:00.918035    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:02.919212    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:04.920833    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:07.411434    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:09.420315    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:11.920476    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:14.415890    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:16.918732    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:18.930012    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:21.422384    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:23.426592    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:25.432977    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:27.926867    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:29.927030    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:32.416749    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:34.427675    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:36.445744    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:38.918431    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:43.034157    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:45.413694    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:47.430289    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:49.924841    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:52.416018    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:54.980289    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:57.425864    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:54:59.910916    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:55:01.922124    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:55:04.424430    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:55:06.426660    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:55:08.913479    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:55:10.916599    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:55:13.417463    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:55:15.961519    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:55:18.417033    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:55:20.423889    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:55:22.427777    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:55:24.916874    9496 node_ready.go:58] node "kindnet-012957" has status "Ready":"False"
	I1025 01:55:25.929998    9496 node_ready.go:38] duration metric: took 4m0.1092417s waiting for node "kindnet-012957" to be "Ready" ...
	I1025 01:55:25.932967    9496 out.go:177] 
	W1025 01:55:25.934978    9496 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1025 01:55:25.934978    9496 out.go:239] * 
	* 
	W1025 01:55:25.936965    9496 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 01:55:25.938980    9496 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (330.25s)
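Note: the GUEST_START timeout above comes from the node_ready.go wait loop, which polled the node's Ready condition roughly every 2.5s for 5m and never saw it leave "False". The following is a minimal, hypothetical Go sketch (not part of the minikube test suite) of the same check; it assumes kubectl is on PATH and that the kindnet-012957 context and node still exist.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Poll the Ready condition of the kindnet-012957 node, mirroring the
// node_ready.go wait seen in the log above (context/node names assumed).
func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "kindnet-012957",
			"get", "node", "kindnet-012957", "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println("node kindnet-012957 is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond) // same ~2.5s cadence as the log above
	}
	fmt.Println("timed out waiting for node kindnet-012957 to be Ready")
}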

                                                
                                    
TestNetworkPlugins/group/false/DNS (371.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5434552s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6533125s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default
E1025 01:56:19.993739    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.
E1025 01:56:26.562769    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6041752s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5463222s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6198906s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6162988s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5515721s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1025 01:58:03.389365    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default
E1025 01:58:23.703034    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5272634s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1025 01:58:29.576974    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
E1025 01:58:36.041258    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5617695s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1025 01:59:11.605305    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.536182s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5225515s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-012957 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.576183s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/false/DNS (371.15s)
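Note: the assertion at net_test.go:180 expects the in-pod nslookup output to contain the cluster service IP 10.96.0.1; every attempt above instead timed out reaching the DNS servers. A minimal sketch of the same probe, assuming kubectl is on PATH and the false-012957 context with its netcat deployment still exists:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Run the same in-cluster DNS lookup the test performs (command copied
// from the log above) and report whether the expected service IP appears.
func main() {
	out, err := exec.Command("kubectl", "--context", "false-012957",
		"exec", "deployment/netcat", "--",
		"nslookup", "kubernetes.default").CombinedOutput()
	fmt.Print(string(out))
	if err != nil || !strings.Contains(string(out), "10.96.0.1") {
		fmt.Println("DNS probe failed: expected the answer to contain 10.96.0.1")
		return
	}
	fmt.Println("DNS probe succeeded")
}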

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (62.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.506196s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1025 01:59:26.608357    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5908279s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.4886263s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.8714252s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5195291s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5324985s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.4729882s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (62.88s)
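Note: the hairpin check simply asks the netcat pod to dial its own "netcat" Service on port 8080 with nc. A minimal sketch of the same probe, assuming kubectl is on PATH and the kubenet-012955 context with its netcat deployment still exists:

package main

import (
	"fmt"
	"os/exec"
)

// Re-run the hairpin connectivity check from the log: the netcat pod dials
// its own "netcat" Service on port 8080 (command copied from the log above).
func main() {
	cmd := exec.Command("kubectl", "--context", "kubenet-012955",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("hairpin probe failed: %v\n%s\n", err, out)
		return
	}
	fmt.Println("hairpin probe succeeded")
}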

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (372.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4968779s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default
E1025 02:01:26.569194    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4779586s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.445689s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5490786s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6412989s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default
E1025 02:02:49.791857    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5070036s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.458011s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1025 02:03:23.713526    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default
E1025 02:03:36.048031    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.466217s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default
E1025 02:04:01.370189    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.
E1025 02:04:03.930566    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4932926s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1025 02:04:19.306071    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6306464s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1025 02:04:52.741994    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default
E1025 02:05:32.978290    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5160047s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1025 02:06:03.717702    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5694667s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (372.95s)
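Note: the probe that fails here is a plain nslookup of kubernetes.default from inside the netcat pod, asserted against the expected ClusterIP. A minimal sketch for reproducing it outside the test harness follows; the context and deployment names come from the log, the expected address 10.96.0.1 comes from the "want" value in the failure message, and the retry/sleep values are assumptions.

// Hedged sketch: the DNS probe behind net_test.go:169/:180, rewritten as a
// standalone program. Retry count and sleep interval are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command(
			"kubectl", "--context", "enable-default-cni-012955",
			"exec", "deployment/netcat", "--",
			"nslookup", "kubernetes.default",
		).CombinedOutput()
		// The test expects the default Service ClusterIP in the answer.
		if err == nil && strings.Contains(string(out), "10.96.0.1") {
			fmt.Println("in-cluster DNS resolution works")
			return
		}
		fmt.Printf("attempt %d: %v\n%s\n", attempt, err, out)
		time.Sleep(10 * time.Second)
	}
	fmt.Println("DNS never resolved kubernetes.default (matches the failure above)")
}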

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (355.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5598048s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4195442s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4811787s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default
E1025 02:03:03.386776    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.459088s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4846482s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1025 02:03:29.575796    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5301435s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default
E1025 02:03:58.735702    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.
E1025 02:03:58.751320    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.
E1025 02:03:58.767505    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.
E1025 02:03:58.798611    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.
E1025 02:03:58.846665    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.
E1025 02:03:58.939445    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.
E1025 02:03:59.112416    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.
E1025 02:03:59.442593    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.
E1025 02:04:00.083873    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5140176s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1025 02:04:09.057711    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.
E1025 02:04:11.604796    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 02:04:15.702490    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5068949s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1025 02:04:39.797700    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.
E1025 02:04:46.894502    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5268631s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default
E1025 02:05:20.766555    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.
E1025 02:05:22.641315    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\client.crt: The system cannot find the path specified.
E1025 02:05:22.657150    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\client.crt: The system cannot find the path specified.
E1025 02:05:22.672466    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\client.crt: The system cannot find the path specified.
E1025 02:05:22.703919    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\client.crt: The system cannot find the path specified.
E1025 02:05:22.751332    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\client.crt: The system cannot find the path specified.
E1025 02:05:22.844289    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\client.crt: The system cannot find the path specified.
E1025 02:05:23.019670    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\client.crt: The system cannot find the path specified.
E1025 02:05:23.346078    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\client.crt: The system cannot find the path specified.
E1025 02:05:23.992084    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\client.crt: The system cannot find the path specified.
E1025 02:05:25.283008    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\client.crt: The system cannot find the path specified.
E1025 02:05:27.843418    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4670107s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1025 02:05:35.970881    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 02:05:38.877107    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
E1025 02:05:43.231577    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4734602s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1025 02:06:26.560450    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 02:06:42.688010    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-012955\client.crt: The system cannot find the path specified.
E1025 02:06:44.692009    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-012957\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-012955 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (16.6614836s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (355.49s)
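Note: this failure shows the same symptom as the enable-default-cni run above: no DNS server reachable from the pod. A plausible first debugging step (not part of the test suite) is to confirm that CoreDNS pods exist and are Running in the affected profile; the sketch below assumes the conventional k8s-app=kube-dns label selector.

// Hedged sketch: quick CoreDNS health check for the bridge-012955 profile.
// This is a debugging aid, not something the test performs.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command(
		"kubectl", "--context", "bridge-012955",
		"-n", "kube-system",
		"get", "pods", "-l", "k8s-app=kube-dns", "-o", "wide",
	).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
}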

                                                
                                    

Test pass (229/265)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 9.59
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 1.02
10 TestDownloadOnly/v1.25.3/json-events 9.66
11 TestDownloadOnly/v1.25.3/preload-exists 0
14 TestDownloadOnly/v1.25.3/kubectl 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.51
16 TestDownloadOnly/DeleteAll 2.53
17 TestDownloadOnly/DeleteAlwaysSucceeds 1.63
18 TestDownloadOnlyKic 35.45
19 TestBinaryMirror 4.38
20 TestOffline 195.73
22 TestAddons/Setup 360.51
26 TestAddons/parallel/MetricsServer 7.84
27 TestAddons/parallel/HelmTiller 49.71
29 TestAddons/parallel/CSI 91.04
30 TestAddons/parallel/Headlamp 25.4
32 TestAddons/serial/GCPAuth 21.88
33 TestAddons/StoppedEnableDisable 14.2
34 TestCertOptions 107.25
35 TestCertExpiration 328.92
36 TestDockerFlags 122.18
37 TestForceSystemdFlag 110.06
38 TestForceSystemdEnv 119.7
43 TestErrorSpam/setup 84.18
44 TestErrorSpam/start 5.55
45 TestErrorSpam/status 6.1
46 TestErrorSpam/pause 4.94
47 TestErrorSpam/unpause 5.67
48 TestErrorSpam/stop 21.91
51 TestFunctional/serial/CopySyncFile 0.02
52 TestFunctional/serial/StartWithProxy 98.2
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 61.95
55 TestFunctional/serial/KubeContext 0.17
56 TestFunctional/serial/KubectlGetPods 0.31
59 TestFunctional/serial/CacheCmd/cache/add_remote 7.66
60 TestFunctional/serial/CacheCmd/cache/add_local 4.23
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.36
62 TestFunctional/serial/CacheCmd/cache/list 0.35
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 1.42
64 TestFunctional/serial/CacheCmd/cache/cache_reload 6.76
65 TestFunctional/serial/CacheCmd/cache/delete 0.76
66 TestFunctional/serial/MinikubeKubectlCmd 0.61
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.71
68 TestFunctional/serial/ExtraConfig 73.08
69 TestFunctional/serial/ComponentHealth 0.24
70 TestFunctional/serial/LogsCmd 3.32
71 TestFunctional/serial/LogsFileCmd 3.53
73 TestFunctional/parallel/ConfigCmd 2.54
75 TestFunctional/parallel/DryRun 3.6
76 TestFunctional/parallel/InternationalLanguage 1.35
77 TestFunctional/parallel/StatusCmd 5.91
82 TestFunctional/parallel/AddonsCmd 1.01
83 TestFunctional/parallel/PersistentVolumeClaim 59.48
85 TestFunctional/parallel/SSHCmd 3.32
86 TestFunctional/parallel/CpCmd 6.55
87 TestFunctional/parallel/MySQL 89.16
88 TestFunctional/parallel/FileSync 1.55
89 TestFunctional/parallel/CertSync 9.25
93 TestFunctional/parallel/NodeLabels 0.38
95 TestFunctional/parallel/NonActiveRuntimeDisabled 1.65
99 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
101 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 17.81
102 TestFunctional/parallel/Version/short 0.49
103 TestFunctional/parallel/Version/components 5.7
104 TestFunctional/parallel/ImageCommands/ImageListShort 1.36
105 TestFunctional/parallel/ImageCommands/ImageListTable 1.38
106 TestFunctional/parallel/ImageCommands/ImageListJson 1.32
107 TestFunctional/parallel/ImageCommands/ImageListYaml 1.58
108 TestFunctional/parallel/ImageCommands/ImageBuild 15.99
109 TestFunctional/parallel/ImageCommands/Setup 3.29
110 TestFunctional/parallel/ProfileCmd/profile_not_create 2.94
111 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 16.16
112 TestFunctional/parallel/ProfileCmd/profile_list 2.04
113 TestFunctional/parallel/ProfileCmd/profile_json_output 2.05
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.33
119 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
120 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 8.02
121 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 20.24
122 TestFunctional/parallel/ImageCommands/ImageSaveToFile 5.61
123 TestFunctional/parallel/ImageCommands/ImageRemove 2.33
124 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.4
125 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 7.67
126 TestFunctional/parallel/DockerEnv/powershell 7.15
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.83
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.9
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.84
130 TestFunctional/delete_addon-resizer_images 0.01
131 TestFunctional/delete_my-image_image 0.01
132 TestFunctional/delete_minikube_cached_images 0.01
135 TestIngressAddonLegacy/StartLegacyK8sCluster 106.02
137 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 44.07
138 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 1.98
142 TestJSONOutput/start/Command 100.48
143 TestJSONOutput/start/Audit 0
145 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/pause/Command 1.96
149 TestJSONOutput/pause/Audit 0
151 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/unpause/Command 1.94
155 TestJSONOutput/unpause/Audit 0
157 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/stop/Command 13.66
161 TestJSONOutput/stop/Audit 0
163 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
165 TestErrorJSONOutput 1.75
167 TestKicCustomNetwork/create_custom_network 86.88
168 TestKicCustomNetwork/use_default_bridge_network 85.24
169 TestKicExistingNetwork 86.21
170 TestKicCustomSubnet 88.73
171 TestMainNoArgs 0.36
172 TestMinikubeProfile 194.23
175 TestMountStart/serial/StartWithMountFirst 21.99
176 TestMountStart/serial/VerifyMountFirst 1.31
177 TestMountStart/serial/StartWithMountSecond 19.1
178 TestMountStart/serial/VerifyMountSecond 1.35
179 TestMountStart/serial/DeleteFirst 4.51
180 TestMountStart/serial/VerifyMountPostDelete 1.28
181 TestMountStart/serial/Stop 2.84
182 TestMountStart/serial/RestartStopped 13.84
183 TestMountStart/serial/VerifyMountPostStop 1.36
186 TestMultiNode/serial/FreshStart2Nodes 189.62
187 TestMultiNode/serial/DeployApp2Nodes 12.55
188 TestMultiNode/serial/PingHostFrom2Pods 3.65
189 TestMultiNode/serial/AddNode 60.58
190 TestMultiNode/serial/ProfileList 1.48
191 TestMultiNode/serial/CopyFile 48.8
192 TestMultiNode/serial/StopNode 8.06
193 TestMultiNode/serial/StartAfterStop 34.45
194 TestMultiNode/serial/RestartKeepsNodes 142.86
195 TestMultiNode/serial/DeleteNode 14.97
196 TestMultiNode/serial/StopMultiNode 26.76
197 TestMultiNode/serial/RestartMultiNode 112.68
198 TestMultiNode/serial/ValidateNameConflict 87.51
202 TestPreload 259.37
203 TestScheduledStopWindows 155.49
207 TestInsufficientStorage 52.79
208 TestRunningBinaryUpgrade 223.64
210 TestKubernetesUpgrade 345.3
211 TestMissingContainerUpgrade 271.18
213 TestStoppedBinaryUpgrade/Setup 0.66
215 TestNoKubernetes/serial/StartNoK8sWithVersion 0.54
223 TestPause/serial/Start 159.42
224 TestNoKubernetes/serial/StartWithK8s 207.43
225 TestStoppedBinaryUpgrade/Upgrade 260.91
226 TestPause/serial/SecondStartNoReconfiguration 55.28
227 TestNoKubernetes/serial/StartWithStopK8s 27.1
229 TestNoKubernetes/serial/Start 22.4
230 TestNoKubernetes/serial/VerifyK8sNotRunning 1.61
231 TestNoKubernetes/serial/ProfileList 9.67
232 TestStoppedBinaryUpgrade/MinikubeLogs 9.98
233 TestNoKubernetes/serial/Stop 4.19
234 TestNoKubernetes/serial/StartNoArgs 18.47
235 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 1.53
248 TestStartStop/group/old-k8s-version/serial/FirstStart 181.88
250 TestStartStop/group/no-preload/serial/FirstStart 164.32
252 TestStartStop/group/embed-certs/serial/FirstStart 136.57
254 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 102.7
255 TestStartStop/group/embed-certs/serial/DeployApp 12.73
256 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.6
257 TestStartStop/group/embed-certs/serial/Stop 13.92
258 TestStartStop/group/old-k8s-version/serial/DeployApp 11.15
259 TestStartStop/group/no-preload/serial/DeployApp 11.11
260 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 1.36
261 TestStartStop/group/embed-certs/serial/SecondStart 348.14
262 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.89
263 TestStartStop/group/old-k8s-version/serial/Stop 13.34
264 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.83
265 TestStartStop/group/no-preload/serial/Stop 13.88
266 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 1.14
267 TestStartStop/group/old-k8s-version/serial/SecondStart 431.81
268 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 1.27
269 TestStartStop/group/no-preload/serial/SecondStart 369.34
270 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.27
271 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.29
272 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.79
273 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 1.23
274 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 361.57
275 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 26.05
276 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.72
277 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 1.73
278 TestStartStop/group/embed-certs/serial/Pause 13.82
279 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 30.42
281 TestStartStop/group/newest-cni/serial/FirstStart 156.35
282 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.93
283 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 2.42
284 TestStartStop/group/no-preload/serial/Pause 18.56
285 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 47.09
286 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.07
287 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 17.75
288 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 1.77
289 TestStartStop/group/old-k8s-version/serial/Pause 12.49
290 TestNetworkPlugins/group/auto/Start 124.9
291 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.9
292 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 1.93
293 TestStartStop/group/default-k8s-diff-port/serial/Pause 24.12
296 TestStartStop/group/newest-cni/serial/DeployApp 0
297 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 4.23
298 TestStartStop/group/newest-cni/serial/Stop 5.5
299 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 1.31
300 TestStartStop/group/newest-cni/serial/SecondStart 50.63
301 TestNetworkPlugins/group/auto/KubeletFlags 1.58
302 TestNetworkPlugins/group/auto/NetCatPod 22.77
303 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
304 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
305 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 2.48
306 TestNetworkPlugins/group/auto/DNS 0.66
307 TestNetworkPlugins/group/auto/Localhost 0.67
308 TestNetworkPlugins/group/auto/HairPin 5.79
310 TestNetworkPlugins/group/false/Start 362.1
312 TestNetworkPlugins/group/enable-default-cni/Start 362.5
313 TestNetworkPlugins/group/false/KubeletFlags 1.51
314 TestNetworkPlugins/group/false/NetCatPod 21.84
315 TestNetworkPlugins/group/bridge/Start 354.77
317 TestNetworkPlugins/group/kubenet/Start 97.73
318 TestNetworkPlugins/group/kubenet/KubeletFlags 1.41
319 TestNetworkPlugins/group/kubenet/NetCatPod 20.02
320 TestNetworkPlugins/group/kubenet/DNS 0.52
321 TestNetworkPlugins/group/kubenet/Localhost 0.46
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 1.39
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 20.79
326 TestNetworkPlugins/group/bridge/KubeletFlags 1.37
327 TestNetworkPlugins/group/bridge/NetCatPod 25.75
TestDownloadOnly/v1.16.0/json-events (9.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-235704 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-235704 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (9.5846558s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.59s)
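Note: this test drives minikube with -o=json, which makes it emit machine-readable events, one JSON object per line (CloudEvents-style). Below is a hedged sketch of how such a stream could be consumed; the field names "type" and "data" are assumptions to verify against the actual output rather than a documented contract.

// Hedged sketch: read a newline-delimited JSON event stream on stdin, e.g.
// piped from `minikube start -o=json --download-only ...`.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
	for scanner.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
			fmt.Fprintln(os.Stderr, "skipping non-JSON line:", err)
			continue
		}
		fmt.Printf("event type=%v data=%v\n", ev["type"], ev["data"])
	}
}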

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (1.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-235704
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-235704: exit status 85 (1.0154876s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-235704 | minikube8\jenkins | v1.27.1 | 24 Oct 22 23:57 GMT |          |
	|         | -p download-only-235704        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/24 23:57:04
	Running on machine: minikube8
	Binary: Built with gc go1.19.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 23:57:04.782992    9024 out.go:296] Setting OutFile to fd 608 ...
	I1024 23:57:04.843164    9024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 23:57:04.843164    9024 out.go:309] Setting ErrFile to fd 612...
	I1024 23:57:04.843164    9024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 23:57:04.855157    9024 root.go:311] Error reading config file at C:\Users\jenkins.minikube8\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube8\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1024 23:57:04.865162    9024 out.go:303] Setting JSON to true
	I1024 23:57:04.868603    9024 start.go:116] hostinfo: {"hostname":"minikube8","uptime":5469,"bootTime":1666650355,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W1024 23:57:04.868603    9024 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1024 23:57:04.914046    9024 out.go:97] [download-only-235704] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1024 23:57:04.914378    9024 notify.go:220] Checking for updates...
	W1024 23:57:04.914378    9024 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1024 23:57:04.917128    9024 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1024 23:57:04.919552    9024 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I1024 23:57:04.922474    9024 out.go:169] MINIKUBE_LOCATION=14956
	I1024 23:57:04.925223    9024 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1024 23:57:04.932572    9024 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1024 23:57:04.933569    9024 driver.go:362] Setting default libvirt URI to qemu:///system
	I1024 23:57:05.211125    9024 docker.go:137] docker version: linux-20.10.17
	I1024 23:57:05.219889    9024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 23:57:05.761336    9024 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-10-24 23:57:05.3863878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1024 23:57:05.766157    9024 out.go:97] Using the docker driver based on user configuration
	I1024 23:57:05.766157    9024 start.go:282] selected driver: docker
	I1024 23:57:05.766157    9024 start.go:808] validating driver "docker" against <nil>
	I1024 23:57:05.783076    9024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 23:57:06.349031    9024 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-10-24 23:57:05.9526673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1024 23:57:06.349031    9024 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1024 23:57:06.465664    9024 start_flags.go:384] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I1024 23:57:06.466268    9024 start_flags.go:867] Wait components to verify : map[apiserver:true system_pods:true]
	I1024 23:57:06.468822    9024 out.go:169] Using Docker Desktop driver with root privileges
	I1024 23:57:06.471173    9024 cni.go:95] Creating CNI manager for ""
	I1024 23:57:06.471173    9024 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1024 23:57:06.471173    9024 start_flags.go:317] config:
	{Name:download-only-235704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-235704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1024 23:57:06.474213    9024 out.go:97] Starting control plane node download-only-235704 in cluster download-only-235704
	I1024 23:57:06.474213    9024 cache.go:120] Beginning downloading kic base image for docker with docker
	I1024 23:57:06.476889    9024 out.go:97] Pulling base image ...
	I1024 23:57:06.476889    9024 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1024 23:57:06.476889    9024 image.go:82] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1024 23:57:06.520061    9024 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1024 23:57:06.520061    9024 cache.go:57] Caching tarball of preloaded images
	I1024 23:57:06.520878    9024 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1024 23:57:06.524931    9024 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1024 23:57:06.524931    9024 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1024 23:57:06.594354    9024 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1024 23:57:06.660115    9024 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 to local cache
	I1024 23:57:06.660115    9024 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191.tar -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.35-1665430468-15094@sha256_2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191.tar
	I1024 23:57:06.660115    9024 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191.tar -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.35-1665430468-15094@sha256_2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191.tar
	I1024 23:57:06.660115    9024 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local cache directory
	I1024 23:57:06.661104    9024 image.go:126] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 to local cache
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-235704"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (1.02s)
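
Note: the dump above shows download.go fetching the v1.16.0 preload with a checksum baked into the URL query ("?checksum=md5:326f3ce331abb64565b50b8c9e791244") before caching it under .minikube\cache\preloaded-tarball. As a rough illustration of that verify-while-downloading pattern (a sketch only, not minikube's actual downloader; the URL and checksum are copied from the log line above):

	// Illustrative only: fetch a file over HTTP and verify an md5 checksum,
	// roughly what the "Downloading: ...?checksum=md5:..." step above amounts to.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func downloadWithMD5(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("unexpected status: %s", resp.Status)
		}

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		// Hash while writing so the tarball is only read once.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		err := downloadWithMD5(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
			"preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
			"326f3ce331abb64565b50b8c9e791244",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}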

                                                
                                    
TestDownloadOnly/v1.25.3/json-events (9.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-235704 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-235704 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker: (9.6608923s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (9.66s)

                                                
                                    
TestDownloadOnly/v1.25.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/kubectl
--- PASS: TestDownloadOnly/v1.25.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/LogsDuration (0.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-235704
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-235704: exit status 85 (505.0337ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-235704 | minikube8\jenkins | v1.27.1 | 24 Oct 22 23:57 GMT |          |
	|         | -p download-only-235704        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	| start   | -o=json --download-only        | download-only-235704 | minikube8\jenkins | v1.27.1 | 24 Oct 22 23:57 GMT |          |
	|         | -p download-only-235704        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/24 23:57:15
	Running on machine: minikube8
	Binary: Built with gc go1.19.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 23:57:15.387107    9508 out.go:296] Setting OutFile to fd 728 ...
	I1024 23:57:15.445009    9508 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 23:57:15.445009    9508 out.go:309] Setting ErrFile to fd 732...
	I1024 23:57:15.445009    9508 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 23:57:15.457331    9508 root.go:311] Error reading config file at C:\Users\jenkins.minikube8\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube8\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I1024 23:57:15.465444    9508 out.go:303] Setting JSON to true
	I1024 23:57:15.470329    9508 start.go:116] hostinfo: {"hostname":"minikube8","uptime":5479,"bootTime":1666650356,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W1024 23:57:15.470635    9508 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1024 23:57:15.475394    9508 out.go:97] [download-only-235704] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1024 23:57:15.475540    9508 notify.go:220] Checking for updates...
	I1024 23:57:15.477964    9508 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1024 23:57:15.479687    9508 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I1024 23:57:15.482708    9508 out.go:169] MINIKUBE_LOCATION=14956
	I1024 23:57:15.485074    9508 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1024 23:57:15.489156    9508 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1024 23:57:15.490011    9508 config.go:180] Loaded profile config "download-only-235704": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1024 23:57:15.490657    9508 start.go:716] api.Load failed for download-only-235704: filestore "download-only-235704": Docker machine "download-only-235704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1024 23:57:15.490909    9508 driver.go:362] Setting default libvirt URI to qemu:///system
	W1024 23:57:15.491180    9508 start.go:716] api.Load failed for download-only-235704: filestore "download-only-235704": Docker machine "download-only-235704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1024 23:57:15.770408    9508 docker.go:137] docker version: linux-20.10.17
	I1024 23:57:15.778714    9508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 23:57:16.331370    9508 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-10-24 23:57:15.9516313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1024 23:57:16.612949    9508 out.go:97] Using the docker driver based on existing profile
	I1024 23:57:16.612949    9508 start.go:282] selected driver: docker
	I1024 23:57:16.612949    9508 start.go:808] validating driver "docker" against &{Name:download-only-235704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-235704 Namespace:default APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1024 23:57:16.628922    9508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 23:57:17.145418    9508 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-10-24 23:57:16.7802046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1024 23:57:17.188340    9508 cni.go:95] Creating CNI manager for ""
	I1024 23:57:17.188340    9508 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1024 23:57:17.188423    9508 start_flags.go:317] config:
	{Name:download-only-235704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-235704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run
/socket_vmnet}
	I1024 23:57:17.466150    9508 out.go:97] Starting control plane node download-only-235704 in cluster download-only-235704
	I1024 23:57:17.466150    9508 cache.go:120] Beginning downloading kic base image for docker with docker
	I1024 23:57:17.469144    9508 out.go:97] Pulling base image ...
	I1024 23:57:17.469254    9508 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1024 23:57:17.469552    9508 image.go:82] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local docker daemon
	I1024 23:57:17.526707    9508 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1024 23:57:17.526928    9508 cache.go:57] Caching tarball of preloaded images
	I1024 23:57:17.527463    9508 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1024 23:57:17.529687    9508 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I1024 23:57:17.529687    9508 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I1024 23:57:17.597283    9508 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4?checksum=md5:624cb874287e7e3d793b79e4205a7f98 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1024 23:57:17.673316    9508 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 to local cache
	I1024 23:57:17.673316    9508 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191.tar -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.35-1665430468-15094@sha256_2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191.tar
	I1024 23:57:17.673316    9508 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191.tar -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.35-1665430468-15094@sha256_2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191.tar
	I1024 23:57:17.673316    9508 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local cache directory
	I1024 23:57:17.673987    9508 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 in local cache directory, skipping pull
	I1024 23:57:17.674040    9508 image.go:110] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 exists in cache, skipping pull
	I1024 23:57:17.674040    9508 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-235704"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.51s)
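
Note: the two "windows sanitize" lines in the dump above rewrite the kic base image reference before using it as a cache file name, because ':' is not a valid character in Windows file names; both the tag separator and the "sha256:" digest separator become '_', while the drive-letter colon in the directory part is left alone. A minimal sketch of that rewrite (illustrative only; the real logic lives in minikube's localpath handling, and this simplification assumes '\' path separators):

	// Illustrative sketch of the "windows sanitize" step: replace ':' in the
	// final path element with '_' so an image reference can serve as a file name.
	package main

	import (
		"fmt"
		"strings"
	)

	func sanitizeCacheFile(p string) string {
		// Split on the last '\' so the drive-letter colon is not touched.
		i := strings.LastIndexByte(p, '\\')
		dir, file := p[:i+1], p[i+1:]
		return dir + strings.ReplaceAll(file, ":", "_")
	}

	func main() {
		in := `C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191.tar`
		// Prints the same sanitized path shown after the "->" in the log above.
		fmt.Println(sanitizeCacheFile(in))
	}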

                                                
                                    
TestDownloadOnly/DeleteAll (2.53s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.5269645s)
--- PASS: TestDownloadOnly/DeleteAll (2.53s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (1.63s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-235704
aaa_download_only_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-235704: (1.6275153s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (1.63s)

                                                
                                    
TestDownloadOnlyKic (35.45s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-235731 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-235731 --force --alsologtostderr --driver=docker: (32.4190567s)
helpers_test.go:175: Cleaning up "download-docker-235731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-235731
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-235731: (1.9144308s)
--- PASS: TestDownloadOnlyKic (35.45s)

                                                
                                    
TestBinaryMirror (4.38s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-235806 --alsologtostderr --binary-mirror http://127.0.0.1:61795 --driver=docker
aaa_download_only_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-235806 --alsologtostderr --binary-mirror http://127.0.0.1:61795 --driver=docker: (2.4406715s)
helpers_test.go:175: Cleaning up "binary-mirror-235806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-235806
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-235806: (1.7109794s)
--- PASS: TestBinaryMirror (4.38s)
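
Note: TestBinaryMirror passes --binary-mirror http://127.0.0.1:61795 so that the Kubernetes binaries (kubectl/kubeadm/kubelet) are fetched from a local endpoint instead of the default upstream. Such a mirror is just a static HTTP file server; a minimal local stand-in might look like the sketch below (illustrative only; the ./mirror directory and its layout are assumptions, not something the test defines):

	// Illustrative only: a bare-bones HTTP file server standing in for the
	// --binary-mirror endpoint used by the test. The ./mirror directory layout
	// is an assumption for this example.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror on the same address the test passes via --binary-mirror.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:61795", nil))
	}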

                                                
                                    
TestOffline (195.73s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-012456 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-012456 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (3m2.6238072s)
helpers_test.go:175: Cleaning up "offline-docker-012456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-012456

                                                
                                                
=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-012456: (13.1088492s)
--- PASS: TestOffline (195.73s)

                                                
                                    
TestAddons/Setup (360.51s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-235811 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-235811 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m0.5083733s)
--- PASS: TestAddons/Setup (360.51s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.84s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: metrics-server stabilized in 26.0196ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-769cd898cd-5lt9q" [9c0e1d44-194a-4ece-9569-498a8a562231] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.1768025s

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:367: (dbg) Run:  kubectl --context addons-235811 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:384: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-235811 addons disable metrics-server --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:384: (dbg) Done: out/minikube-windows-amd64.exe -p addons-235811 addons disable metrics-server --alsologtostderr -v=1: (2.2012559s)
--- PASS: TestAddons/parallel/MetricsServer (7.84s)

                                                
                                    
TestAddons/parallel/HelmTiller (49.71s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: tiller-deploy stabilized in 10.3686ms
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-nd97g" [f7b50a7d-4cff-4075-9c31-6cbce7059846] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.1408533s
addons_test.go:425: (dbg) Run:  kubectl --context addons-235811 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) Done: kubectl --context addons-235811 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (41.4458904s)
addons_test.go:442: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-235811 addons disable helm-tiller --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:442: (dbg) Done: out/minikube-windows-amd64.exe -p addons-235811 addons disable helm-tiller --alsologtostderr -v=1: (3.094257s)
--- PASS: TestAddons/parallel/HelmTiller (49.71s)

                                                
                                    
TestAddons/parallel/CSI (91.04s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:513: csi-hostpath-driver pods stabilized in 30.0158ms
addons_test.go:516: (dbg) Run:  kubectl --context addons-235811 create -f testdata\csi-hostpath-driver\pvc.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:516: (dbg) Done: kubectl --context addons-235811 create -f testdata\csi-hostpath-driver\pvc.yaml: (2.9954178s)
addons_test.go:521: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-235811 get pvc hpvc -o jsonpath={.status.phase} -n default

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:392: (dbg) Run:  kubectl --context addons-235811 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:526: (dbg) Run:  kubectl --context addons-235811 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:531: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [1ee0f55d-ef3a-4a6c-97e4-5fa6a9982692] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [1ee0f55d-ef3a-4a6c-97e4-5fa6a9982692] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [1ee0f55d-ef3a-4a6c-97e4-5fa6a9982692] Running
addons_test.go:531: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 44.0903556s
addons_test.go:536: (dbg) Run:  kubectl --context addons-235811 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:541: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-235811 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:425: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:417: (dbg) Run:  kubectl --context addons-235811 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:546: (dbg) Run:  kubectl --context addons-235811 delete pod task-pv-pod

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:546: (dbg) Done: kubectl --context addons-235811 delete pod task-pv-pod: (3.6857123s)
addons_test.go:552: (dbg) Run:  kubectl --context addons-235811 delete pvc hpvc
addons_test.go:558: (dbg) Run:  kubectl --context addons-235811 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:563: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-235811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:392: (dbg) Run:  kubectl --context addons-235811 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:568: (dbg) Run:  kubectl --context addons-235811 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:573: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [249e4fc6-4ec6-40cb-b122-74f1f7477910] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [249e4fc6-4ec6-40cb-b122-74f1f7477910] Running
addons_test.go:573: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 16.1458978s
addons_test.go:578: (dbg) Run:  kubectl --context addons-235811 delete pod task-pv-pod-restore
addons_test.go:578: (dbg) Done: kubectl --context addons-235811 delete pod task-pv-pod-restore: (1.9202254s)
addons_test.go:582: (dbg) Run:  kubectl --context addons-235811 delete pvc hpvc-restore
addons_test.go:586: (dbg) Run:  kubectl --context addons-235811 delete volumesnapshot new-snapshot-demo
addons_test.go:590: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-235811 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:590: (dbg) Done: out/minikube-windows-amd64.exe -p addons-235811 addons disable csi-hostpath-driver --alsologtostderr -v=1: (10.7970217s)
addons_test.go:594: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-235811 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:594: (dbg) Done: out/minikube-windows-amd64.exe -p addons-235811 addons disable volumesnapshots --alsologtostderr -v=1: (2.6136543s)
--- PASS: TestAddons/parallel/CSI (91.04s)

                                                
                                    
TestAddons/parallel/Headlamp (25.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-235811 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-235811 --alsologtostderr -v=1: (4.2523597s)
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-5f4cf474d8-tnrs7" [2572d323-8b8c-4967-b1b6-266ebcb37dca] Pending

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-tnrs7" [2572d323-8b8c-4967-b1b6-266ebcb37dca] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-tnrs7" [2572d323-8b8c-4967-b1b6-266ebcb37dca] Running

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 21.1421065s
--- PASS: TestAddons/parallel/Headlamp (25.40s)

                                                
                                    
TestAddons/serial/GCPAuth (21.88s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:605: (dbg) Run:  kubectl --context addons-235811 create -f testdata\busybox.yaml
addons_test.go:605: (dbg) Done: kubectl --context addons-235811 create -f testdata\busybox.yaml: (1.9839887s)
addons_test.go:612: (dbg) Run:  kubectl --context addons-235811 create sa gcp-auth-test
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [82639ca7-3578-4cf2-a500-ef11ee7e2a23] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [82639ca7-3578-4cf2-a500-ef11ee7e2a23] Running
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.0389902s
addons_test.go:624: (dbg) Run:  kubectl --context addons-235811 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:636: (dbg) Run:  kubectl --context addons-235811 describe sa gcp-auth-test
addons_test.go:650: (dbg) Run:  kubectl --context addons-235811 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:674: (dbg) Run:  kubectl --context addons-235811 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:687: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-235811 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:687: (dbg) Done: out/minikube-windows-amd64.exe -p addons-235811 addons disable gcp-auth --alsologtostderr -v=1: (8.6797013s)
--- PASS: TestAddons/serial/GCPAuth (21.88s)

                                                
                                    
TestAddons/StoppedEnableDisable (14.2s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-235811
addons_test.go:134: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-235811: (13.0939429s)
addons_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-235811
addons_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-235811
--- PASS: TestAddons/StoppedEnableDisable (14.20s)

                                                
                                    
TestCertOptions (107.25s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-013357 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
E1025 01:34:11.597334    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-013357 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m34.8975741s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-013357 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-013357 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (1.5237004s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-013357 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-013357 -- "sudo cat /etc/kubernetes/admin.conf": (1.8792617s)
helpers_test.go:175: Cleaning up "cert-options-013357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-013357

                                                
                                                
=== CONT  TestCertOptions
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-013357: (8.7159914s)
--- PASS: TestCertOptions (107.25s)
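
Note: the openssl step in TestCertOptions checks that the extra --apiserver-ips and --apiserver-names values actually land in the SANs of the generated apiserver.crt. The same inspection can be sketched in Go with the standard library (illustrative only; the local "apiserver.crt" path is an assumption for the example):

	// Illustrative: print the SANs of a certificate, which is what the
	// "openssl x509 -text -noout" output above is inspected for.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// The test expects localhost / www.google.com and 127.0.0.1 /
		// 192.168.15.15 to show up here.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs :", cert.IPAddresses)
	}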

                                                
                                    
TestCertExpiration (328.92s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-013202 --memory=2048 --cert-expiration=3m --driver=docker

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-013202 --memory=2048 --cert-expiration=3m --driver=docker: (1m27.831983s)

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-013202 --memory=2048 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-013202 --memory=2048 --cert-expiration=8760h --driver=docker: (47.348959s)
helpers_test.go:175: Cleaning up "cert-expiration-013202" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-013202
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-013202: (13.7360864s)
--- PASS: TestCertExpiration (328.92s)

                                                
                                    
TestDockerFlags (122.18s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-013342 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-013342 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m47.5685021s)
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-013342 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-013342 ssh "sudo systemctl show docker --property=Environment --no-pager": (1.5207964s)
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-013342 ssh "sudo systemctl show docker --property=ExecStart --no-pager"

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-013342 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (1.5699467s)
helpers_test.go:175: Cleaning up "docker-flags-013342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-013342

                                                
                                                
=== CONT  TestDockerFlags
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-013342: (11.5147792s)
--- PASS: TestDockerFlags (122.18s)

                                                
                                    
TestForceSystemdFlag (110.06s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-012812 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-012812 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (1m39.140332s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-012812 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-012812 ssh "docker info --format {{.CgroupDriver}}": (1.7586906s)
helpers_test.go:175: Cleaning up "force-systemd-flag-012812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-012812

                                                
                                                
=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-012812: (9.1654638s)
--- PASS: TestForceSystemdFlag (110.06s)
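The check above reduces to a single ssh command. A bash-style sketch, assuming (as the test does) that --force-systemd makes dockerd report the systemd cgroup driver; force-systemd-demo is a placeholder profile:

	minikube start -p force-systemd-demo --memory=2048 --force-systemd --driver=docker
	# expected output: systemd (without --force-systemd this typically prints cgroupfs)
	minikube -p force-systemd-demo ssh "docker info --format {{.CgroupDriver}}"
	minikube delete -p force-systemd-demo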

                                                
                                    
TestForceSystemdEnv (119.7s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-013002 --memory=2048 --alsologtostderr -v=5 --driver=docker
E1025 01:31:26.549833    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
docker_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-013002 --memory=2048 --alsologtostderr -v=5 --driver=docker: (1m45.9166977s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-013002 ssh "docker info --format {{.CgroupDriver}}"

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-013002 ssh "docker info --format {{.CgroupDriver}}": (2.0287986s)
helpers_test.go:175: Cleaning up "force-systemd-env-013002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-013002

                                                
                                                
=== CONT  TestForceSystemdEnv
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-013002: (11.75223s)
--- PASS: TestForceSystemdEnv (119.70s)

                                                
                                    
TestErrorSpam/setup (84.18s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-000625 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-000625 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 --driver=docker: (1m24.1744078s)
error_spam_test.go:91: acceptable stderr: "! C:\\ProgramData\\chocolatey\\bin\\kubectl.exe is version 1.18.2, which may have incompatibilites with Kubernetes 1.25.3."
--- PASS: TestErrorSpam/setup (84.18s)

                                                
                                    
TestErrorSpam/start (5.55s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 start --dry-run: (1.8285024s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 start --dry-run: (1.8833218s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 start --dry-run: (1.8378417s)
--- PASS: TestErrorSpam/start (5.55s)

                                                
                                    
TestErrorSpam/status (6.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 status: (1.8589084s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 status: (2.645711s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 status: (1.5887961s)
--- PASS: TestErrorSpam/status (6.10s)

                                                
                                    
TestErrorSpam/pause (4.94s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 pause: (2.0661399s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 pause: (1.3777769s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 pause: (1.4974418s)
--- PASS: TestErrorSpam/pause (4.94s)

                                                
                                    
TestErrorSpam/unpause (5.67s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 unpause: (1.9641357s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 unpause: (2.1148264s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 unpause: (1.589353s)
--- PASS: TestErrorSpam/unpause (5.67s)

                                                
                                    
TestErrorSpam/stop (21.91s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 stop: (13.1565829s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 stop: (4.3667315s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-000625 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-000625 stop: (4.3843084s)
--- PASS: TestErrorSpam/stop (21.91s)
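All of the TestErrorSpam subtests follow the same pattern: run one subcommand against the same profile three times and flag any stderr output that is not on the test's allow-list. A rough bash-style equivalent, with nospam-demo and the log directory as placeholders:

	LOGDIR=$(mktemp -d)
	minikube start -p nospam-demo -n=1 --memory=2250 --wait=false --log_dir="$LOGDIR" --driver=docker
	for i in 1 2 3; do
	  # any unexpected stderr line here is what the test would report as "spam"
	  minikube -p nospam-demo --log_dir "$LOGDIR" status 2>>"$LOGDIR/stderr.txt"
	done
	grep . "$LOGDIR/stderr.txt" || echo "no unexpected output"
	minikube delete -p nospam-demo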

                                                
                                    
TestFunctional/serial/CopySyncFile (0.02s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\test\nested\copy\4200\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.02s)

                                                
                                    
TestFunctional/serial/StartWithProxy (98.2s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-000838 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E1025 00:09:11.552116    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:09:11.581866    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:09:11.597191    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:09:11.628432    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:09:11.676653    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:09:11.769989    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:09:11.941822    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:09:12.268501    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:09:12.923738    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:09:14.216685    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:09:17.862610    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:09:22.991378    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:09:33.246365    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:09:53.740859    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
functional_test.go:2161: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-000838 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m38.1972291s)
--- PASS: TestFunctional/serial/StartWithProxy (98.20s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (61.95s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-000838 --alsologtostderr -v=8
E1025 00:10:34.706962    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
functional_test.go:652: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-000838 --alsologtostderr -v=8: (1m1.9446972s)
functional_test.go:656: soft start took 1m1.945883s for "functional-000838" cluster.
--- PASS: TestFunctional/serial/SoftStart (61.95s)

                                                
                                    
TestFunctional/serial/KubeContext (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.17s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-000838 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (7.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 cache add k8s.gcr.io/pause:3.1: (2.4244758s)
functional_test.go:1042: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 cache add k8s.gcr.io/pause:3.3: (2.4591087s)
functional_test.go:1042: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 cache add k8s.gcr.io/pause:latest: (2.7782709s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (7.66s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (4.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-000838 C:\Users\jenkins.minikube8\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3828214182\001
functional_test.go:1070: (dbg) Done: docker build -t minikube-local-cache-test:functional-000838 C:\Users\jenkins.minikube8\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3828214182\001: (1.4739731s)
functional_test.go:1082: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 cache add minikube-local-cache-test:functional-000838
functional_test.go:1082: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 cache add minikube-local-cache-test:functional-000838: (2.1600802s)
functional_test.go:1087: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 cache delete minikube-local-cache-test:functional-000838
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-000838
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.23s)
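Caching a locally built image, as above, is just a docker build followed by minikube cache add/delete. A bash-style sketch; my-local-image, ./build-context and the functional-demo profile are made-up names:

	docker build -t my-local-image:demo ./build-context
	minikube -p functional-demo cache add my-local-image:demo
	# when the image is no longer needed in the cache
	minikube -p functional-demo cache delete my-local-image:demo
	docker rmi my-local-image:demo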

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.36s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.35s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh sudo crictl images
functional_test.go:1117: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 ssh sudo crictl images: (1.4197569s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.42s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (6.76s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1140: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 ssh sudo docker rmi k8s.gcr.io/pause:latest: (1.6521434s)
functional_test.go:1146: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-000838 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (1.4345783s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 cache reload: (2.3205361s)
functional_test.go:1156: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1156: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (1.3543183s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (6.76s)
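The cache_reload sequence can be replayed manually: delete the image inside the node, confirm it is gone, then ask minikube to push its cached images back in. A bash-style sketch against an existing profile (functional-demo is a placeholder):

	minikube -p functional-demo ssh sudo docker rmi k8s.gcr.io/pause:latest
	# now absent: crictl inspecti exits non-zero with "no such image"
	minikube -p functional-demo ssh sudo crictl inspecti k8s.gcr.io/pause:latest || true
	# reload everything in the cache back into the node
	minikube -p functional-demo cache reload
	# present again: this time inspecti succeeds
	minikube -p functional-demo ssh sudo crictl inspecti k8s.gcr.io/pause:latest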

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.76s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.76s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.61s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 kubectl -- --context functional-000838 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.61s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out\kubectl.exe --context functional-000838 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.71s)

                                                
                                    
TestFunctional/serial/ExtraConfig (73.08s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-000838 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 00:11:56.639230    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
functional_test.go:750: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-000838 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m13.0778674s)
functional_test.go:754: restart took 1m13.0782836s for "functional-000838" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (73.08s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-000838 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.24s)
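The health check above just reads the control-plane pods and their phase/Ready conditions. Roughly the same view can be pulled with kubectl alone; functional-demo is a placeholder context and the custom-columns expression is an assumption, not what the test itself runs:

	kubectl --context functional-demo get po -l tier=control-plane -n kube-system \
	  -o custom-columns='NAME:.metadata.name,PHASE:.status.phase,READY:.status.conditions[?(@.type=="Ready")].status'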

                                                
                                    
TestFunctional/serial/LogsCmd (3.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 logs
functional_test.go:1229: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 logs: (3.3192133s)
--- PASS: TestFunctional/serial/LogsCmd (3.32s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 logs --file C:\Users\jenkins.minikube8\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1977537857\001\logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 logs --file C:\Users\jenkins.minikube8\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1977537857\001\logs.txt: (3.5266117s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.53s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (2.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-000838 config get cpus: exit status 14 (408.0713ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-000838 config get cpus: exit status 14 (377.6449ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.54s)
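For reference, the config behaviour exercised above: get on an unset key exits with status 14, and set/unset round-trip the value. A quick bash-style check (the profile name is a placeholder):

	minikube -p functional-demo config get cpus; echo "exit=$?"   # 14 while unset
	minikube -p functional-demo config set cpus 2
	minikube -p functional-demo config get cpus                   # prints 2
	minikube -p functional-demo config unset cpus
	minikube -p functional-demo config get cpus; echo "exit=$?"   # back to 14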

                                                
                                    
TestFunctional/parallel/DryRun (3.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-000838 --dry-run --memory 250MB --alsologtostderr --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-000838 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.3608576s)

                                                
                                                
-- stdout --
	* [functional-000838] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14956
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 00:14:18.114533    9264 out.go:296] Setting OutFile to fd 1004 ...
	I1025 00:14:18.180024    9264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 00:14:18.180024    9264 out.go:309] Setting ErrFile to fd 1008...
	I1025 00:14:18.180024    9264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 00:14:18.202126    9264 out.go:303] Setting JSON to false
	I1025 00:14:18.205138    9264 start.go:116] hostinfo: {"hostname":"minikube8","uptime":6502,"bootTime":1666650356,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W1025 00:14:18.205138    9264 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 00:14:18.209162    9264 out.go:177] * [functional-000838] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1025 00:14:18.213134    9264 notify.go:220] Checking for updates...
	I1025 00:14:18.215138    9264 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 00:14:18.218147    9264 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I1025 00:14:18.221143    9264 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 00:14:18.225123    9264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 00:14:18.230127    9264 config.go:180] Loaded profile config "functional-000838": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 00:14:18.232132    9264 driver.go:362] Setting default libvirt URI to qemu:///system
	I1025 00:14:18.533696    9264 docker.go:137] docker version: linux-20.10.17
	I1025 00:14:18.551234    9264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 00:14:19.104353    9264 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:52 SystemTime:2022-10-25 00:14:18.7071021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 00:14:19.109372    9264 out.go:177] * Using the docker driver based on existing profile
	I1025 00:14:19.116375    9264 start.go:282] selected driver: docker
	I1025 00:14:19.116375    9264 start.go:808] validating driver "docker" against &{Name:functional-000838 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-000838 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false re
gistry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 00:14:19.117366    9264 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 00:14:19.170391    9264 out.go:177] 
	W1025 00:14:19.173841    9264 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 00:14:19.175919    9264 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-000838 --dry-run --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:984: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-000838 --dry-run --alsologtostderr -v=1 --driver=docker: (2.2422399s)
--- PASS: TestFunctional/parallel/DryRun (3.60s)
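The dry-run test is driven purely by exit codes: requesting 250MB trips the memory validation (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY above), while a plain dry run against the same profile succeeds. A bash-style sketch with a placeholder profile name:

	minikube start -p functional-demo --dry-run --memory 250MB --driver=docker
	echo "exit status: $?"   # 23 here: RSRC_INSUFFICIENT_REQ_MEMORY
	# without the undersized memory request the same dry run passes
	minikube start -p functional-demo --dry-run --driver=docker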

                                                
                                    
TestFunctional/parallel/InternationalLanguage (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-000838 --dry-run --memory 250MB --alsologtostderr --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-000838 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.3470432s)

                                                
                                                
-- stdout --
	* [functional-000838] minikube v1.27.1 sur Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14956
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 00:14:16.766674    7080 out.go:296] Setting OutFile to fd 792 ...
	I1025 00:14:16.830831    7080 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 00:14:16.830831    7080 out.go:309] Setting ErrFile to fd 796...
	I1025 00:14:16.830831    7080 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 00:14:16.852834    7080 out.go:303] Setting JSON to false
	I1025 00:14:16.856826    7080 start.go:116] hostinfo: {"hostname":"minikube8","uptime":6501,"bootTime":1666650355,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W1025 00:14:16.857846    7080 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1025 00:14:16.860821    7080 out.go:177] * [functional-000838] minikube v1.27.1 sur Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I1025 00:14:16.867816    7080 notify.go:220] Checking for updates...
	I1025 00:14:16.870817    7080 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I1025 00:14:16.874838    7080 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I1025 00:14:16.876835    7080 out.go:177]   - MINIKUBE_LOCATION=14956
	I1025 00:14:16.879823    7080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 00:14:16.882824    7080 config.go:180] Loaded profile config "functional-000838": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 00:14:16.883822    7080 driver.go:362] Setting default libvirt URI to qemu:///system
	I1025 00:14:17.164670    7080 docker.go:137] docker version: linux-20.10.17
	I1025 00:14:17.172660    7080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 00:14:17.731845    7080 info.go:265] docker info: {ID:M4FO:JDXL:6ZVH:VFMH:S5Y2:SPI6:M5KN:IXAP:7HY6:BFJP:RVUI:7RHR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:52 SystemTime:2022-10-25 00:14:17.3418204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.9] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-
plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.19.0]] Warnings:<nil>}}
	I1025 00:14:17.736296    7080 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1025 00:14:17.738541    7080 start.go:282] selected driver: docker
	I1025 00:14:17.738582    7080 start.go:808] validating driver "docker" against &{Name:functional-000838 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1665430468-15094@sha256:2c137487f3327e6653ff519ec7fd599d25c0275ae67f44e4a71485aabe1e7191 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-000838 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false re
gistry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1025 00:14:17.738837    7080 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 00:14:17.804872    7080 out.go:177] 
	W1025 00:14:17.806877    7080 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 00:14:17.810876    7080 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.35s)

                                                
                                    
TestFunctional/parallel/StatusCmd (5.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 status: (1.5608533s)
functional_test.go:853: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:853: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (1.8788983s)
functional_test.go:865: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 status -o json

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:865: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 status -o json: (2.4717414s)
--- PASS: TestFunctional/parallel/StatusCmd (5.91s)
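The status checks above exercise the three output modes: the default table, a Go-template format string, and JSON. Equivalent invocations (the profile name is a placeholder):

	minikube -p functional-demo status
	minikube -p functional-demo status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	minikube -p functional-demo status -o json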

                                                
                                    
TestFunctional/parallel/AddonsCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 addons list

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1632: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (1.01s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (59.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [040a5736-73bd-4e92-b3a2-9388f210bbb9] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.1003132s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-000838 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-000838 apply -f testdata/storage-provisioner/pvc.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-000838 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-000838 apply -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [bfe72ebb-3731-474d-84b6-94b684b4df81] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [bfe72ebb-3731-474d-84b6-94b684b4df81] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [bfe72ebb-3731-474d-84b6-94b684b4df81] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 33.0941806s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-000838 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:100: (dbg) Done: kubectl --context functional-000838 exec sp-pod -- touch /tmp/mount/foo: (1.2969616s)
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-000838 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-000838 delete -f testdata/storage-provisioner/pod.yaml: (5.9930177s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-000838 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [5ff00d85-4bc2-4a5e-a1f1-70f2ca0e059b] Pending
helpers_test.go:342: "sp-pod" [5ff00d85-4bc2-4a5e-a1f1-70f2ca0e059b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [5ff00d85-4bc2-4a5e-a1f1-70f2ca0e059b] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.1772459s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-000838 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (59.48s)
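The PersistentVolumeClaim test repeatedly waits for pods matching a label to reach a healthy state (the 4m0s and 3m0s budgets above). A minimal Go sketch of that kind of wait loop, driven through kubectl rather than the test's own helpers in helpers_test.go (context, namespace, selector and timeout mirror the log; the polling interval is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls `kubectl get pods` for a label selector until every
// matching pod reports phase Running, or the timeout expires.
func waitForRunning(kubeContext, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "-n", namespace,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			running := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					running = false
					break
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods matching %q not Running within %v", selector, timeout)
}

func main() {
	if err := waitForRunning("functional-000838", "default", "test=storage-provisioner", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}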

                                                
                                    
TestFunctional/parallel/SSHCmd (3.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 ssh "echo hello": (1.7720474s)
functional_test.go:1672: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh "cat /etc/hostname"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1672: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 ssh "cat /etc/hostname": (1.5502821s)
--- PASS: TestFunctional/parallel/SSHCmd (3.32s)

                                                
                                    
TestFunctional/parallel/CpCmd (6.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 cp testdata\cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 cp testdata\cp-test.txt /home/docker/cp-test.txt: (1.5852605s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh -n functional-000838 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 ssh -n functional-000838 "sudo cat /home/docker/cp-test.txt": (1.502969s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 cp functional-000838:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestFunctionalparallelCpCmd2245269016\001\cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 cp functional-000838:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestFunctionalparallelCpCmd2245269016\001\cp-test.txt: (1.5346641s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh -n functional-000838 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 ssh -n functional-000838 "sudo cat /home/docker/cp-test.txt": (1.9275202s)
--- PASS: TestFunctional/parallel/CpCmd (6.55s)
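The cp test is a simple round trip: copy a file into the node, read it back over ssh, then copy it back out. A Go sketch that mirrors those three commands from the log (the local destination file name is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each step is one of the commands the test runs above.
	steps := [][]string{
		{"minikube", "-p", "functional-000838", "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt"},
		{"minikube", "-p", "functional-000838", "ssh", "-n", "functional-000838", "sudo cat /home/docker/cp-test.txt"},
		{"minikube", "-p", "functional-000838", "cp", "functional-000838:/home/docker/cp-test.txt", "cp-test-roundtrip.txt"},
	}
	for _, s := range steps {
		out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
		fmt.Printf("%v\n%s", s, out)
		if err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}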

                                                
                                    
TestFunctional/parallel/MySQL (89.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-000838 replace --force -f testdata\mysql.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-zfh68" [e4191e78-ae83-4272-9aa0-dd3c9d287cf5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-zfh68" [e4191e78-ae83-4272-9aa0-dd3c9d287cf5] Running
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m5.0723564s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-000838 exec mysql-596b7fcdbf-zfh68 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-000838 exec mysql-596b7fcdbf-zfh68 -- mysql -ppassword -e "show databases;": exit status 1 (501.75ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-000838 exec mysql-596b7fcdbf-zfh68 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-000838 exec mysql-596b7fcdbf-zfh68 -- mysql -ppassword -e "show databases;": exit status 1 (482.8845ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-000838 exec mysql-596b7fcdbf-zfh68 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-000838 exec mysql-596b7fcdbf-zfh68 -- mysql -ppassword -e "show databases;": exit status 1 (731.719ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-000838 exec mysql-596b7fcdbf-zfh68 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-000838 exec mysql-596b7fcdbf-zfh68 -- mysql -ppassword -e "show databases;": exit status 1 (609.5947ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-000838 exec mysql-596b7fcdbf-zfh68 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-000838 exec mysql-596b7fcdbf-zfh68 -- mysql -ppassword -e "show databases;": exit status 1 (623.1042ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-000838 exec mysql-596b7fcdbf-zfh68 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-000838 exec mysql-596b7fcdbf-zfh68 -- mysql -ppassword -e "show databases;": exit status 1 (465.8709ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-000838 exec mysql-596b7fcdbf-zfh68 -- mysql -ppassword -e "show databases;"
E1025 00:19:11.553189    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/MySQL (89.16s)
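The ERROR 2002 (socket not yet listening) and ERROR 1045 (auth not yet initialised) responses above are transient while mysqld is still starting inside the pod, which is why the same probe is re-run until it succeeds. A minimal Go sketch of such a retry, using the pod name and command from the log (the attempt count and sleep are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Retry the same probe the test runs until mysqld inside the pod accepts it.
	args := []string{"--context", "functional-000838", "exec", "mysql-596b7fcdbf-zfh68", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(5 * time.Second)
	}
}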

                                                
                                    
TestFunctional/parallel/FileSync (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/4200/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo cat /etc/test/nested/copy/4200/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1858: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo cat /etc/test/nested/copy/4200/hosts": (1.5511416s)
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (1.55s)

                                                
                                    
TestFunctional/parallel/CertSync (9.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/4200.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo cat /etc/ssl/certs/4200.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1900: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo cat /etc/ssl/certs/4200.pem": (1.3367817s)
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/4200.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo cat /usr/share/ca-certificates/4200.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1900: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo cat /usr/share/ca-certificates/4200.pem": (1.4728156s)
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo cat /etc/ssl/certs/51391683.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1900: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo cat /etc/ssl/certs/51391683.0": (1.8325041s)
functional_test.go:1926: Checking for existence of /etc/ssl/certs/42002.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo cat /etc/ssl/certs/42002.pem"
E1025 00:14:11.555493    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo cat /etc/ssl/certs/42002.pem": (1.6251706s)
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/42002.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo cat /usr/share/ca-certificates/42002.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo cat /usr/share/ca-certificates/42002.pem": (1.5533639s)
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (1.4299996s)
--- PASS: TestFunctional/parallel/CertSync (9.25s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-000838 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.38s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-000838 ssh "sudo systemctl is-active crio": exit status 1 (1.6526466s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.65s)
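`systemctl is-active` exits non-zero when the unit is inactive, so the exit status 1 above (wrapping systemctl's status 3 over ssh) is the expected outcome on a Docker-runtime cluster; the stdout text "inactive" is what the check actually cares about. A minimal Go sketch of the same probe:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// stdout carries the unit state; a non-zero exit just means "not active".
	out, err := exec.Command("minikube", "-p", "functional-000838", "ssh",
		"sudo systemctl is-active crio").Output()
	fmt.Printf("crio is-active reports: %s", out) // expected: "inactive"
	if err != nil {
		fmt.Println("(non-zero exit is expected for an inactive unit:", err, ")")
	}
}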

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-000838 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-000838 apply -f testdata\testsvc.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [72a85493-cdad-4d75-945f-34ed0ae02248] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [72a85493-cdad-4d75-945f-34ed0ae02248] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 17.141343s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.81s)

                                                
                                    
TestFunctional/parallel/Version/short (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 version --short
--- PASS: TestFunctional/parallel/Version/short (0.49s)

                                                
                                    
TestFunctional/parallel/Version/components (5.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 version -o=json --components
functional_test.go:2197: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 version -o=json --components: (5.695675s)
--- PASS: TestFunctional/parallel/Version/components (5.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image ls --format short
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image ls --format short: (1.3567341s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-000838 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-000838
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-000838
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image ls --format table
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image ls --format table: (1.3775907s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-000838 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/google-containers/addon-resizer      | functional-000838 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/minikube-local-cache-test | functional-000838 | 6304176761c2f | 30B    |
| docker.io/library/nginx                     | alpine            | b997307a58ab5 | 23.6MB |
| registry.k8s.io/kube-scheduler              | v1.25.3           | 6d23ec0e8b87e | 50.6MB |
| registry.k8s.io/pause                       | 3.8               | 4873874c08efc | 711kB  |
| registry.k8s.io/etcd                        | 3.5.4-0           | a8a176a5d5d69 | 300MB  |
| k8s.gcr.io/pause                            | 3.6               | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 5d58c024174dd | 142MB  |
| registry.k8s.io/kube-controller-manager     | v1.25.3           | 6039992312758 | 117MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/localhost/my-image                | functional-000838 | 8d3e160dd786a | 1.24MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.25.3           | 0346dbd74bcb9 | 128MB  |
| registry.k8s.io/kube-proxy                  | v1.25.3           | beaaf00edd38a | 61.7MB |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (1.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image ls --format json
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image ls --format json: (1.3227303s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-000838 image ls --format json:
[{"id":"beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"61700000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"50600000"},{"id":"4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":[],"repoTags":["registry.k8s.io
/pause:3.8"],"size":"711000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-000838"],"size":"32900000"},{"id":"a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"300000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"6304176761c2f34b4d9ee302430dbb58bc2cbd4267aa6e0c4b341202024c29dc","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-000838"],"size":"30"},{"id":"5d58c024174dd06df1c4d41d8d44b485e3080422374971005270588204ca3b82","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"034
6dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"128000000"},{"id":"60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"117000000"},{"id":"8d3e160dd786a62b4866d118fc76c2ec2138ac747dd2b17824a3c29db922718b","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-000838"],"size":"1240000"},{"id":"b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23600000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (1.32s)
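The JSON printed by `image ls --format json` can be consumed programmatically. A minimal Go sketch that parses it, with the struct fields taken from the output above (id, repoDigests, repoTags, size; sizes are strings of bytes):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-000838",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}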

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image ls --format yaml
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image ls --format yaml: (1.5773045s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-000838 image ls --format yaml:
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 6304176761c2f34b4d9ee302430dbb58bc2cbd4267aa6e0c4b341202024c29dc
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-000838
size: "30"
- id: 5d58c024174dd06df1c4d41d8d44b485e3080422374971005270588204ca3b82
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "128000000"
- id: 60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "117000000"
- id: b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23600000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "50600000"
- id: beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "61700000"
- id: 4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.8
size: "711000"
- id: a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "300000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-000838
size: "32900000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (15.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-000838 ssh pgrep buildkitd: exit status 1 (1.7902613s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image build -t localhost/my-image:functional-000838 testdata\build
E1025 00:14:40.494548    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
functional_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image build -t localhost/my-image:functional-000838 testdata\build: (12.6237325s)
functional_test.go:316: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-000838 image build -t localhost/my-image:functional-000838 testdata\build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in d0986a7e6203
Removing intermediate container d0986a7e6203
---> 41df7fcc85c9
Step 3/3 : ADD content.txt /
---> 8d3e160dd786
Successfully built 8d3e160dd786
Successfully tagged localhost/my-image:functional-000838
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image ls: (1.5739961s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (15.99s)
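The build output above shows a three-step build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) driven through `minikube image build` after first probing for buildkitd, where a non-zero pgrep exit simply means no such process is running. A Go sketch of that probe-and-build sequence, assuming a minikube binary on PATH and a local testdata/build directory (how the probe result is acted on here is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits 1 when buildkitd is not running in the node; that is informational here.
	if err := exec.Command("minikube", "-p", "functional-000838", "ssh", "pgrep buildkitd").Run(); err != nil {
		fmt.Println("buildkitd not running in the node:", err)
	}
	out, err := exec.Command("minikube", "-p", "functional-000838",
		"image", "build", "-t", "localhost/my-image:functional-000838", "testdata/build").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("image build failed:", err)
	}
}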

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (3.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.0091501s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-000838
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.29s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (2.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.2282752s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (2.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (16.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image load --daemon gcr.io/google-containers/addon-resizer:functional-000838

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image load --daemon gcr.io/google-containers/addon-resizer:functional-000838: (14.6738428s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image ls: (1.4849361s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (16.16s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.6276002s)
functional_test.go:1311: Took "1.6276002s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "413.8676ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (2.04s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.6483522s)
functional_test.go:1362: Took "1.6484313s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "398.3477ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (2.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-000838 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.33s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-000838 tunnel --alsologtostderr] ...
helpers_test.go:506: unable to kill pid 9144: OpenProcess: The parameter is incorrect.
helpers_test.go:506: unable to kill pid 10040: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (8.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image load --daemon gcr.io/google-containers/addon-resizer:functional-000838
functional_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image load --daemon gcr.io/google-containers/addon-resizer:functional-000838: (6.102834s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image ls: (1.9142869s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (8.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (20.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.52159s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-000838
functional_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image load --daemon gcr.io/google-containers/addon-resizer:functional-000838

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image load --daemon gcr.io/google-containers/addon-resizer:functional-000838: (16.1345002s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image ls: (1.3422858s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (20.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image save gcr.io/google-containers/addon-resizer:functional-000838 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image save gcr.io/google-containers/addon-resizer:functional-000838 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (5.6062703s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (2.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image rm gcr.io/google-containers/addon-resizer:functional-000838

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image rm gcr.io/google-containers/addon-resizer:functional-000838: (1.2742396s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image ls: (1.0591333s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (4.3705352s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image ls: (1.0326414s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-000838

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 image save --daemon gcr.io/google-containers/addon-resizer:functional-000838

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-windows-amd64.exe -p functional-000838 image save --daemon gcr.io/google-containers/addon-resizer:functional-000838: (7.2334403s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-000838
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.67s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (7.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:492: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-000838 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-000838"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:492: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-000838 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-000838": (4.3575755s)
functional_test.go:515: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-000838 docker-env | Invoke-Expression ; docker images"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:515: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-000838 docker-env | Invoke-Expression ; docker images": (2.7821068s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (7.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.83s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.90s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-000838 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.84s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: context deadline exceeded (0s)
functional_test.go:188: failed to remove image "gcr.io/google-containers/addon-resizer:1.8.8" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8": context deadline exceeded
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-000838
functional_test.go:186: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:functional-000838: context deadline exceeded (0s)
functional_test.go:188: failed to remove image "gcr.io/google-containers/addon-resizer:functional-000838" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:functional-000838": context deadline exceeded
--- PASS: TestFunctional/delete_addon-resizer_images (0.01s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-000838
functional_test.go:194: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-000838: context deadline exceeded (0s)
functional_test.go:196: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-000838": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-000838
functional_test.go:202: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-000838: context deadline exceeded (0s)
functional_test.go:204: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-000838": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (106.02s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-004854 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E1025 00:49:11.579827    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-004854 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: (1m46.0193886s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (106.02s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (44.07s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-004854 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-004854 addons enable ingress --alsologtostderr -v=5: (44.0679467s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (44.07s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (1.98s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-004854 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-004854 addons enable ingress-dns --alsologtostderr -v=5: (1.9776947s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (1.98s)

                                                
                                    
TestJSONOutput/start/Command (100.48s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-005211 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E1025 00:53:03.367234    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 00:53:03.379996    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 00:53:03.395695    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 00:53:03.427971    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 00:53:03.475037    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 00:53:03.557310    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 00:53:03.732221    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 00:53:04.054119    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 00:53:04.701499    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 00:53:05.995864    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 00:53:08.562268    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 00:53:13.692885    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 00:53:23.945824    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 00:53:44.441829    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-005211 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m40.4745477s)
--- PASS: TestJSONOutput/start/Command (100.48s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (1.96s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-005211 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-005211 --output=json --user=testUser: (1.9635302s)
--- PASS: TestJSONOutput/pause/Command (1.96s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (1.94s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-005211 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-005211 --output=json --user=testUser: (1.9423133s)
--- PASS: TestJSONOutput/unpause/Command (1.94s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (13.66s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-005211 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-005211 --output=json --user=testUser: (13.6611516s)
--- PASS: TestJSONOutput/stop/Command (13.66s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.75s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-005414 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-005414 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (380.6744ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"316fe024-9acc-4445-9ed8-2d875234b8ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-005414] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"58096a1d-c478-4261-a740-3d5d5caaf9db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube8\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"17d8fb52-4afb-4ae7-aa22-c0872eec18d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"ce6b2d22-3e04-4182-bc96-104952f149b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14956"}}
	{"specversion":"1.0","id":"e4460129-e8f5-4e46-a047-8e276e32ae2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c2fea3d5-d88b-4c76-88fe-322b6fc89f23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-005414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-005414
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-005414: (1.3721535s)
--- PASS: TestErrorJSONOutput (1.75s)
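The stdout above is one CloudEvents-style JSON object per line. A minimal PowerShell sketch for pulling the error event out of such output; json-demo is a hypothetical profile name, and the event-per-line shape is assumed from the capture above:

	# Parse each JSON line emitted by --output=json and keep only error events.
	$events = out/minikube-windows-amd64.exe start -p json-demo --output=json --driver=fail |
	    Where-Object { $_.Trim().StartsWith('{') } |
	    ForEach-Object { $_ | ConvertFrom-Json }
	$events | Where-Object { $_.type -eq 'io.k8s.sigs.minikube.error' } |
	    ForEach-Object { "{0}: {1}" -f $_.data.name, $_.data.message }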

                                                
                                    
TestKicCustomNetwork/create_custom_network (86.88s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-005415 --network=
E1025 00:54:25.406387    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-005415 --network=: (1m20.7248339s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-005415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-005415
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-005415: (5.9339573s)
--- PASS: TestKicCustomNetwork/create_custom_network (86.88s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (85.24s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-005542 --network=bridge
E1025 00:55:47.343023    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 00:56:26.544391    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 00:56:26.559539    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 00:56:26.575182    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 00:56:26.605656    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 00:56:26.652930    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 00:56:26.745133    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 00:56:26.914459    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 00:56:27.240678    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 00:56:27.888968    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 00:56:29.176332    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 00:56:31.759849    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 00:56:36.895095    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 00:56:47.136792    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-005542 --network=bridge: (1m19.716866s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-005542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-005542
E1025 00:57:07.628278    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-005542: (5.3410395s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (85.24s)

                                                
                                    
TestKicExistingNetwork (86.21s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-005708 --network=existing-network
E1025 00:57:48.598559    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 00:58:03.359922    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-005708 --network=existing-network: (1m19.4340031s)
helpers_test.go:175: Cleaning up "existing-network-005708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-005708
E1025 00:58:31.193133    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-005708: (5.4086947s)
--- PASS: TestKicExistingNetwork (86.21s)

                                                
                                    
TestKicCustomSubnet (88.73s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-005834 --subnet=192.168.60.0/24
E1025 00:58:55.898359    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:59:10.525505    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 00:59:11.572199    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-005834 --subnet=192.168.60.0/24: (1m22.3631655s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-005834 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-005834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-005834
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-005834: (6.1638277s)
--- PASS: TestKicCustomSubnet (88.73s)
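A minimal sketch of the same check by hand, assuming custom-subnet-demo as a throwaway profile name (the test relies on minikube naming the docker network after the profile, which is what the inspect call above reads back):

	# Create a cluster on a caller-chosen subnet, then read back the subnet the
	# docker network was actually given, using the same --format as the test.
	out/minikube-windows-amd64.exe start -p custom-subnet-demo --subnet=192.168.60.0/24 --driver=docker
	docker network inspect custom-subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
	# Clean up afterwards.
	out/minikube-windows-amd64.exe delete -p custom-subnet-demo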

                                                
                                    
TestMainNoArgs (0.36s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.36s)

                                                
                                    
TestMinikubeProfile (194.23s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-010003 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-010003 --driver=docker: (1m19.0246897s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-010003 --driver=docker
E1025 01:01:26.544277    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 01:01:54.381126    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-010003 --driver=docker: (1m36.6027274s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-010003
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.3910206s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-010003
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
E1025 01:03:03.364933    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.4060609s)
helpers_test.go:175: Cleaning up "second-010003" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-010003
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-010003: (6.5378958s)
helpers_test.go:175: Cleaning up "first-010003" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-010003
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-010003: (6.0866655s)
--- PASS: TestMinikubeProfile (194.23s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (21.99s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-010317 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-010317 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (20.9856355s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.99s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (1.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-010317 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-010317 ssh -- ls /minikube-host: (1.3096522s)
--- PASS: TestMountStart/serial/VerifyMountFirst (1.31s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (19.1s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-010317 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-010317 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (18.090692s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.10s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (1.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-010317 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-010317 ssh -- ls /minikube-host: (1.35099s)
--- PASS: TestMountStart/serial/VerifyMountSecond (1.35s)

                                                
                                    
TestMountStart/serial/DeleteFirst (4.51s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-010317 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-010317 --alsologtostderr -v=5: (4.5102389s)
--- PASS: TestMountStart/serial/DeleteFirst (4.51s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-010317 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-010317 ssh -- ls /minikube-host: (1.2772035s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (1.28s)

                                                
                                    
TestMountStart/serial/Stop (2.84s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-010317
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-010317: (2.8352738s)
--- PASS: TestMountStart/serial/Stop (2.84s)

                                                
                                    
TestMountStart/serial/RestartStopped (13.84s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-010317
E1025 01:04:11.584070    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-010317: (12.8281807s)
--- PASS: TestMountStart/serial/RestartStopped (13.84s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (1.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-010317 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-010317 ssh -- ls /minikube-host: (1.3584086s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (1.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (189.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-010431 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E1025 01:06:26.537848    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
multinode_test.go:83: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-010431 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (3m7.2353184s)
multinode_test.go:89: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 status --alsologtostderr
multinode_test.go:89: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 status --alsologtostderr: (2.3849191s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (189.62s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (12.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- rollout status deployment/busybox: (3.3763446s)
multinode_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- exec busybox-65db55d5d6-bd8kp -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- exec busybox-65db55d5d6-bd8kp -- nslookup kubernetes.io: (2.1420123s)
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- exec busybox-65db55d5d6-sh28r -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- exec busybox-65db55d5d6-sh28r -- nslookup kubernetes.io: (1.7482076s)
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- exec busybox-65db55d5d6-bd8kp -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- exec busybox-65db55d5d6-sh28r -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- exec busybox-65db55d5d6-bd8kp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- exec busybox-65db55d5d6-sh28r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (12.55s)
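The DNS checks above can be rerun against the live cluster; a short sketch using the commands from this run (the busybox pod names are specific to this run, so list them first on a fresh cluster):

	# Resolve the pod names of the busybox deployment, then exercise in-cluster DNS.
	out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- get pods -o jsonpath='{.items[*].metadata.name}'
	out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- exec busybox-65db55d5d6-bd8kp -- nslookup kubernetes.default.svc.cluster.local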

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- exec busybox-65db55d5d6-bd8kp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- exec busybox-65db55d5d6-bd8kp -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- exec busybox-65db55d5d6-sh28r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-010431 -- exec busybox-65db55d5d6-sh28r -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (3.65s)

                                                
                                    
TestMultiNode/serial/AddNode (60.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-010431 -v 3 --alsologtostderr
E1025 01:08:03.374093    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
multinode_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-010431 -v 3 --alsologtostderr: (57.1138402s)
multinode_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 status --alsologtostderr: (3.468635s)
--- PASS: TestMultiNode/serial/AddNode (60.58s)

                                                
                                    
TestMultiNode/serial/ProfileList (1.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.481541s)
--- PASS: TestMultiNode/serial/ProfileList (1.48s)

                                                
                                    
TestMultiNode/serial/CopyFile (48.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 status --output json --alsologtostderr: (3.248345s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 cp testdata\cp-test.txt multinode-010431:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 cp testdata\cp-test.txt multinode-010431:/home/docker/cp-test.txt: (1.450266s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431 "sudo cat /home/docker/cp-test.txt": (1.6376748s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestMultiNodeserialCopyFile2072384243\001\cp-test_multinode-010431.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestMultiNodeserialCopyFile2072384243\001\cp-test_multinode-010431.txt: (1.3418732s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431 "sudo cat /home/docker/cp-test.txt": (1.3635981s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431:/home/docker/cp-test.txt multinode-010431-m02:/home/docker/cp-test_multinode-010431_multinode-010431-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431:/home/docker/cp-test.txt multinode-010431-m02:/home/docker/cp-test_multinode-010431_multinode-010431-m02.txt: (1.9928942s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431 "sudo cat /home/docker/cp-test.txt": (1.3914275s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m02 "sudo cat /home/docker/cp-test_multinode-010431_multinode-010431-m02.txt"
E1025 01:09:11.574607    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m02 "sudo cat /home/docker/cp-test_multinode-010431_multinode-010431-m02.txt": (1.405563s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431:/home/docker/cp-test.txt multinode-010431-m03:/home/docker/cp-test_multinode-010431_multinode-010431-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431:/home/docker/cp-test.txt multinode-010431-m03:/home/docker/cp-test_multinode-010431_multinode-010431-m03.txt: (1.9393482s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431 "sudo cat /home/docker/cp-test.txt": (1.3493042s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m03 "sudo cat /home/docker/cp-test_multinode-010431_multinode-010431-m03.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m03 "sudo cat /home/docker/cp-test_multinode-010431_multinode-010431-m03.txt": (1.3298477s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 cp testdata\cp-test.txt multinode-010431-m02:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 cp testdata\cp-test.txt multinode-010431-m02:/home/docker/cp-test.txt: (1.4254391s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m02 "sudo cat /home/docker/cp-test.txt": (1.3871905s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestMultiNodeserialCopyFile2072384243\001\cp-test_multinode-010431-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestMultiNodeserialCopyFile2072384243\001\cp-test_multinode-010431-m02.txt: (1.3974333s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m02 "sudo cat /home/docker/cp-test.txt": (1.442089s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431-m02:/home/docker/cp-test.txt multinode-010431:/home/docker/cp-test_multinode-010431-m02_multinode-010431.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431-m02:/home/docker/cp-test.txt multinode-010431:/home/docker/cp-test_multinode-010431-m02_multinode-010431.txt: (2.0024485s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m02 "sudo cat /home/docker/cp-test.txt": (1.4066596s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431 "sudo cat /home/docker/cp-test_multinode-010431-m02_multinode-010431.txt"
E1025 01:09:26.564225    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431 "sudo cat /home/docker/cp-test_multinode-010431-m02_multinode-010431.txt": (1.3934072s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431-m02:/home/docker/cp-test.txt multinode-010431-m03:/home/docker/cp-test_multinode-010431-m02_multinode-010431-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431-m02:/home/docker/cp-test.txt multinode-010431-m03:/home/docker/cp-test_multinode-010431-m02_multinode-010431-m03.txt: (1.994145s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m02 "sudo cat /home/docker/cp-test.txt": (1.3913108s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m03 "sudo cat /home/docker/cp-test_multinode-010431-m02_multinode-010431-m03.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m03 "sudo cat /home/docker/cp-test_multinode-010431-m02_multinode-010431-m03.txt": (1.415658s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 cp testdata\cp-test.txt multinode-010431-m03:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 cp testdata\cp-test.txt multinode-010431-m03:/home/docker/cp-test.txt: (1.3927154s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m03 "sudo cat /home/docker/cp-test.txt": (1.39659s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestMultiNodeserialCopyFile2072384243\001\cp-test_multinode-010431-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\TestMultiNodeserialCopyFile2072384243\001\cp-test_multinode-010431-m03.txt: (1.3862536s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m03 "sudo cat /home/docker/cp-test.txt": (1.3639014s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431-m03:/home/docker/cp-test.txt multinode-010431:/home/docker/cp-test_multinode-010431-m03_multinode-010431.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431-m03:/home/docker/cp-test.txt multinode-010431:/home/docker/cp-test_multinode-010431-m03_multinode-010431.txt: (1.9562883s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m03 "sudo cat /home/docker/cp-test.txt": (1.3680849s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431 "sudo cat /home/docker/cp-test_multinode-010431-m03_multinode-010431.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431 "sudo cat /home/docker/cp-test_multinode-010431-m03_multinode-010431.txt": (1.464462s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431-m03:/home/docker/cp-test.txt multinode-010431-m02:/home/docker/cp-test_multinode-010431-m03_multinode-010431-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431-m03:/home/docker/cp-test.txt multinode-010431-m02:/home/docker/cp-test_multinode-010431-m03_multinode-010431-m02.txt: (1.9388712s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m03 "sudo cat /home/docker/cp-test.txt": (1.4130457s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m02 "sudo cat /home/docker/cp-test_multinode-010431-m03_multinode-010431-m02.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m02 "sudo cat /home/docker/cp-test_multinode-010431-m03_multinode-010431-m02.txt": (1.3979372s)
--- PASS: TestMultiNode/serial/CopyFile (48.80s)
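A condensed sketch of the node-to-node copy pattern the test exercises, using the profile and node names from this run (copy-check.txt is a hypothetical target path):

	# Push a host file into the control-plane node, fan it out to a worker,
	# and verify the contents over ssh -n.
	out/minikube-windows-amd64.exe -p multinode-010431 cp testdata\cp-test.txt multinode-010431:/home/docker/cp-test.txt
	out/minikube-windows-amd64.exe -p multinode-010431 cp multinode-010431:/home/docker/cp-test.txt multinode-010431-m02:/home/docker/copy-check.txt
	out/minikube-windows-amd64.exe -p multinode-010431 ssh -n multinode-010431-m02 "sudo cat /home/docker/copy-check.txt"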

                                                
                                    
TestMultiNode/serial/StopNode (8.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 node stop m03: (2.7109157s)
multinode_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-010431 status: exit status 7 (2.6419172s)

                                                
                                                
-- stdout --
	multinode-010431
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-010431-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-010431-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-010431 status --alsologtostderr: exit status 7 (2.7108445s)

                                                
                                                
-- stdout --
	multinode-010431
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-010431-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-010431-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 01:09:53.379555    3632 out.go:296] Setting OutFile to fd 732 ...
	I1025 01:09:53.443560    3632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:09:53.443560    3632 out.go:309] Setting ErrFile to fd 608...
	I1025 01:09:53.443560    3632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:09:53.454562    3632 out.go:303] Setting JSON to false
	I1025 01:09:53.454562    3632 mustload.go:65] Loading cluster: multinode-010431
	I1025 01:09:53.454562    3632 notify.go:220] Checking for updates...
	I1025 01:09:53.454562    3632 config.go:180] Loaded profile config "multinode-010431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:09:53.455559    3632 status.go:255] checking status of multinode-010431 ...
	I1025 01:09:53.470565    3632 cli_runner.go:164] Run: docker container inspect multinode-010431 --format={{.State.Status}}
	I1025 01:09:53.688723    3632 status.go:330] multinode-010431 host status = "Running" (err=<nil>)
	I1025 01:09:53.688723    3632 host.go:66] Checking if "multinode-010431" exists ...
	I1025 01:09:53.697048    3632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-010431
	I1025 01:09:53.891043    3632 host.go:66] Checking if "multinode-010431" exists ...
	I1025 01:09:53.908110    3632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 01:09:53.915044    3632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-010431
	I1025 01:09:54.095204    3632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63672 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-010431\id_rsa Username:docker}
	I1025 01:09:54.282006    3632 ssh_runner.go:195] Run: systemctl --version
	I1025 01:09:54.304997    3632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 01:09:54.390250    3632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-010431
	I1025 01:09:54.598588    3632 kubeconfig.go:92] found "multinode-010431" server: "https://127.0.0.1:63671"
	I1025 01:09:54.598659    3632 api_server.go:165] Checking apiserver status ...
	I1025 01:09:54.608946    3632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 01:09:54.650242    3632 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1891/cgroup
	I1025 01:09:54.673667    3632 api_server.go:181] apiserver freezer: "7:freezer:/docker/61296b3dd66fc32286c65f89626afafce8aa27c861e7afc9baaebf8d718cbcfb/kubepods/burstable/podd301624a1d04425a1707d648c6d40492/4ec5ff9b8ae0a98e31d14e559795104508b760f09a3acdc65f925170f3fa5f20"
	I1025 01:09:54.684549    3632 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/61296b3dd66fc32286c65f89626afafce8aa27c861e7afc9baaebf8d718cbcfb/kubepods/burstable/podd301624a1d04425a1707d648c6d40492/4ec5ff9b8ae0a98e31d14e559795104508b760f09a3acdc65f925170f3fa5f20/freezer.state
	I1025 01:09:54.715793    3632 api_server.go:203] freezer state: "THAWED"
	I1025 01:09:54.715793    3632 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63671/healthz ...
	I1025 01:09:54.733442    3632 api_server.go:278] https://127.0.0.1:63671/healthz returned 200:
	ok
	I1025 01:09:54.734441    3632 status.go:421] multinode-010431 apiserver status = Running (err=<nil>)
	I1025 01:09:54.734441    3632 status.go:257] multinode-010431 status: &{Name:multinode-010431 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 01:09:54.734441    3632 status.go:255] checking status of multinode-010431-m02 ...
	I1025 01:09:54.749797    3632 cli_runner.go:164] Run: docker container inspect multinode-010431-m02 --format={{.State.Status}}
	I1025 01:09:54.959185    3632 status.go:330] multinode-010431-m02 host status = "Running" (err=<nil>)
	I1025 01:09:54.959221    3632 host.go:66] Checking if "multinode-010431-m02" exists ...
	I1025 01:09:54.967935    3632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-010431-m02
	I1025 01:09:55.161804    3632 host.go:66] Checking if "multinode-010431-m02" exists ...
	I1025 01:09:55.172341    3632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 01:09:55.179664    3632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-010431-m02
	I1025 01:09:55.379281    3632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63741 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-010431-m02\id_rsa Username:docker}
	I1025 01:09:55.526172    3632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 01:09:55.560606    3632 status.go:257] multinode-010431-m02 status: &{Name:multinode-010431-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 01:09:55.560703    3632 status.go:255] checking status of multinode-010431-m03 ...
	I1025 01:09:55.581524    3632 cli_runner.go:164] Run: docker container inspect multinode-010431-m03 --format={{.State.Status}}
	I1025 01:09:55.816007    3632 status.go:330] multinode-010431-m03 host status = "Stopped" (err=<nil>)
	I1025 01:09:55.816149    3632 status.go:343] host is not running, skipping remaining checks
	I1025 01:09:55.816149    3632 status.go:257] multinode-010431-m03 status: &{Name:multinode-010431-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (8.06s)
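
One stopped worker is what turns the otherwise healthy status output above into exit status 7, so the test has to treat a non-zero exit as expected rather than fatal. A small sketch of that pattern, using the profile and node names from the log; it is an illustration, not the test's actual code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "multinode-010431" // from the log above

	// Stop one worker node, as the test does with "node stop m03".
	if err := exec.Command("minikube", "-p", profile, "node", "stop", "m03").Run(); err != nil {
		fmt.Println("node stop failed:", err)
		return
	}

	// With a node down, "minikube status" exits non-zero (7 in the log above),
	// so read the exit code instead of treating the error as fatal.
	out, err := exec.Command("minikube", "-p", profile, "status").CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("status failed to run:", err)
	}
}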

                                                
                                    
TestMultiNode/serial/StartAfterStop (34.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 node start m03 --alsologtostderr: (30.6645444s)
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 status
multinode_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 status: (3.2965594s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (34.45s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (142.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-010431
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-010431
multinode_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-010431: (27.4328444s)
multinode_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-010431 --wait=true -v=8 --alsologtostderr
E1025 01:11:26.542532    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 01:12:49.752737    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
multinode_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-010431 --wait=true -v=8 --alsologtostderr: (1m54.7303692s)
multinode_test.go:298: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-010431
--- PASS: TestMultiNode/serial/RestartKeepsNodes (142.86s)

                                                
                                    
TestMultiNode/serial/DeleteNode (14.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 node delete m03: (9.805173s)
multinode_test.go:398: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 status --alsologtostderr
E1025 01:13:03.370513    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
multinode_test.go:398: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 status --alsologtostderr: (2.4368084s)
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:422: (dbg) Done: kubectl get nodes: (2.3110835s)
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (14.97s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (26.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 stop
multinode_test.go:312: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 stop: (25.2378092s)
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-010431 status: exit status 7 (733.3586ms)

                                                
                                                
-- stdout --
	multinode-010431
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-010431-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-010431 status --alsologtostderr: exit status 7 (791.6613ms)

                                                
                                                
-- stdout --
	multinode-010431
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-010431-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 01:13:34.323572    8984 out.go:296] Setting OutFile to fd 864 ...
	I1025 01:13:34.379695    8984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:13:34.379695    8984 out.go:309] Setting ErrFile to fd 452...
	I1025 01:13:34.379695    8984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 01:13:34.389697    8984 out.go:303] Setting JSON to false
	I1025 01:13:34.389697    8984 mustload.go:65] Loading cluster: multinode-010431
	I1025 01:13:34.389697    8984 notify.go:220] Checking for updates...
	I1025 01:13:34.390697    8984 config.go:180] Loaded profile config "multinode-010431": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1025 01:13:34.390697    8984 status.go:255] checking status of multinode-010431 ...
	I1025 01:13:34.404690    8984 cli_runner.go:164] Run: docker container inspect multinode-010431 --format={{.State.Status}}
	I1025 01:13:34.622626    8984 status.go:330] multinode-010431 host status = "Stopped" (err=<nil>)
	I1025 01:13:34.622660    8984 status.go:343] host is not running, skipping remaining checks
	I1025 01:13:34.622708    8984 status.go:257] multinode-010431 status: &{Name:multinode-010431 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 01:13:34.622741    8984 status.go:255] checking status of multinode-010431-m02 ...
	I1025 01:13:34.639423    8984 cli_runner.go:164] Run: docker container inspect multinode-010431-m02 --format={{.State.Status}}
	I1025 01:13:34.853810    8984 status.go:330] multinode-010431-m02 host status = "Stopped" (err=<nil>)
	I1025 01:13:34.853810    8984 status.go:343] host is not running, skipping remaining checks
	I1025 01:13:34.853810    8984 status.go:257] multinode-010431-m02 status: &{Name:multinode-010431-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (26.76s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (112.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-010431 --wait=true -v=8 --alsologtostderr --driver=docker
E1025 01:14:11.586035    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
multinode_test.go:352: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-010431 --wait=true -v=8 --alsologtostderr --driver=docker: (1m49.3075517s)
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-010431 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-010431 status --alsologtostderr: (2.5772836s)
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (112.68s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (87.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-010431
multinode_test.go:450: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-010431-m02 --driver=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-010431-m02 --driver=docker: exit status 14 (406.9146ms)

                                                
                                                
-- stdout --
	* [multinode-010431-m02] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14956
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-010431-m02' is duplicated with machine name 'multinode-010431-m02' in profile 'multinode-010431'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-010431-m03 --driver=docker
E1025 01:15:35.913541    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 01:16:26.543047    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
multinode_test.go:458: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-010431-m03 --driver=docker: (1m17.5017532s)
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-010431
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-010431: exit status 80 (2.4377343s)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-010431
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-010431-m03 already exists in multinode-010431-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube_node_faf4be2af32ab6d64b40fb15c6239eaae2a98ae3_98.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-010431-m03
multinode_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-010431-m03: (6.8304653s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (87.51s)
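
The MK_USAGE failure above is the name-uniqueness rule at work: the requested profile name collides with a machine name that already exists inside the multinode-010431 profile. The sketch below is a rough pre-check that assumes "minikube profile list --output=json" returns a top-level "valid" array of profiles with a "Name" field; note it only catches clashes with other profile names, not with node names like the one rejected here.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors the assumed shape of "minikube profile list --output=json".
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

func main() {
	candidate := "multinode-010431-m02" // the conflicting name from the log above

	out, err := exec.Command("minikube", "profile", "list", "--output=json").Output()
	if err != nil {
		fmt.Println("profile list:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	for _, p := range pl.Valid {
		if p.Name == candidate {
			fmt.Println("name already taken by an existing profile:", candidate)
			return
		}
	}
	fmt.Println("no profile with that name (node-name clashes, as above, are still possible):", candidate)
}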

                                                
                                    
TestPreload (259.37s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-011708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
E1025 01:18:03.368882    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 01:19:11.588465    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-011708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (2m6.6160503s)
preload_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-011708 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-011708 -- docker pull gcr.io/k8s-minikube/busybox: (2.7355887s)
preload_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-011708 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.24.6
preload_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-011708 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.24.6: (2m1.6400683s)
preload_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-011708 -- docker images
preload_test.go:76: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-011708 -- docker images: (2.0569521s)
helpers_test.go:175: Cleaning up "test-preload-011708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-011708
E1025 01:21:26.550185    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-011708: (6.3169141s)
--- PASS: TestPreload (259.37s)

                                                
                                    
TestScheduledStopWindows (155.49s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-012128 --memory=2048 --driver=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-012128 --memory=2048 --driver=docker: (1m22.0904692s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-012128 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-012128 --schedule 5m: (1.6163092s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-012128 -n scheduled-stop-012128
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-012128 -n scheduled-stop-012128: (1.5638359s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-012128 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-012128 -- sudo systemctl show minikube-scheduled-stop --no-page: (1.4013782s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-012128 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-012128 --schedule 5s: (2.8393142s)
E1025 01:23:03.379765    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-012128
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-012128: exit status 7 (561.541ms)

                                                
                                                
-- stdout --
	scheduled-stop-012128
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-012128 -n scheduled-stop-012128
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-012128 -n scheduled-stop-012128: exit status 7 (555.9873ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-012128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-012128
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-012128: (4.8462639s)
--- PASS: TestScheduledStopWindows (155.49s)
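
The scheduled-stop flow above is: "stop --schedule <duration>", confirm the pending stop via "status --format={{.TimeToStop}}", then (after the short 5s schedule) expect "status --format={{.Host}}" to report Stopped, with exit status 7 treated as acceptable. A minimal sketch that polls for that transition, using the profile name from the log; it is not the suite's helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// hostState reads the templated host field; the command exits non-zero (7 above)
// once the host is stopped, so the error is deliberately ignored here.
func hostState(profile string) string {
	out, _ := exec.Command("minikube", "status", "-p", profile, "--format={{.Host}}").Output()
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "scheduled-stop-012128" // from the log above

	// Schedule a stop 5 seconds out, mirroring "stop --schedule 5s".
	if err := exec.Command("minikube", "stop", "-p", profile, "--schedule", "5s").Run(); err != nil {
		fmt.Println("schedule failed:", err)
		return
	}

	// Poll until the host reports Stopped or the deadline passes.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if hostState(profile) == "Stopped" {
			fmt.Println("host stopped as scheduled")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("host never reached Stopped before the deadline")
}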

                                                
                                    
TestInsufficientStorage (52.79s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-012403 --memory=2048 --output=json --wait=true --driver=docker
E1025 01:24:11.588874    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-012403 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (45.10201s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0f556ec1-2c7b-4f61-a9d9-64177545c7d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-012403] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff5328c0-035d-41f8-88db-22b99347283f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube8\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"213837e1-49b8-448d-8ad9-7aaec8fc6056","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"04361ccc-d84c-45ba-a921-964dca33b099","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14956"}}
	{"specversion":"1.0","id":"b33ffffc-a948-4c40-89b4-47f1d25e043c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fd2855a7-4d77-4327-97ef-b9f9dea27eb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"aec2fded-91a5-46ca-bc1b-c6d5ee51e0c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"216038be-4142-4e70-be1c-e8807c37a034","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a32b0300-d82b-4538-9b48-7b424715a4cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"d6bc619e-0cf2-4019-966d-f488c76acefa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-012403 in cluster insufficient-storage-012403","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"09e6fffc-fe66-499f-8e65-c175278e9395","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba658c3b-f265-4254-8f47-72b4ed90f7c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bdfc824d-ef7d-49c4-9518-1a203778f6f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-012403 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-012403 --output=json --layout=cluster: exit status 7 (1.4307278s)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-012403","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.27.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-012403","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 01:24:50.192692    4340 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-012403" does not appear in C:\Users\jenkins.minikube8\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-012403 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-012403 --output=json --layout=cluster: exit status 7 (1.3834712s)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-012403","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.27.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-012403","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 01:24:51.566059    9904 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-012403" does not appear in C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	E1025 01:24:51.608734    9904 status.go:559] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\insufficient-storage-012403\events.json: The system cannot find the file specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-012403" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-012403
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-012403: (4.8652079s)
--- PASS: TestInsufficientStorage (52.79s)
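
With --output=json, start emits one CloudEvents-style JSON object per line, and the storage failure arrives as a ...minikube.error event carrying name, exitcode, advice and message fields, all visible in the stdout block above. The sketch below decodes such a stream; the struct fields are copied from those log lines, not from any minikube package.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors the fields visible in the JSON lines above; it is not a minikube type.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe the JSON output in, e.g.:
	//   minikube start -p insufficient-storage-012403 --output=json ... | go run main.go
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some event lines are long

	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore anything that is not a JSON event line
		}
		if strings.HasSuffix(ev.Type, ".error") {
			// The RSRC_DOCKER_STORAGE event above carries exitcode "26" and remediation advice.
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}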

                                                
                                    
TestRunningBinaryUpgrade (223.64s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.160755174.exe start -p running-upgrade-012958 --memory=2200 --vm-driver=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.160755174.exe start -p running-upgrade-012958 --memory=2200 --vm-driver=docker: (2m3.4040406s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-012958 --memory=2200 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-012958 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m32.3319472s)
helpers_test.go:175: Cleaning up "running-upgrade-012958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-012958

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-012958: (7.2210277s)
--- PASS: TestRunningBinaryUpgrade (223.64s)

                                                
                                    
TestKubernetesUpgrade (345.3s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-012935 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-012935 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: (2m13.0878784s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-012935

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-012935: (5.6209176s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-012935 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-012935 status --format={{.Host}}: exit status 7 (650.0863ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-012935 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-012935 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker: (1m22.5122559s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-012935 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-012935 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-012935 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker: exit status 106 (380.5307ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-012935] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14956
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-012935
	    minikube start -p kubernetes-upgrade-012935 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0129352 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start -p kubernetes-upgrade-012935 --kubernetes-version=v1.25.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-012935 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-012935 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker: (1m49.1309407s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-012935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-012935
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-012935: (13.6777294s)
--- PASS: TestKubernetesUpgrade (345.30s)
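
The upgrade path exercised above is strictly forward: start at v1.16.0, stop, restart at v1.25.3, and the later downgrade attempt is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED). A condensed sketch of that sequence, assuming minikube is on PATH; the version strings and profile name come from the log.

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to minikube and wraps any failure with the captured output.
func run(args ...string) error {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("minikube %v: %w\n%s", args, err, out)
	}
	return nil
}

func main() {
	profile := "kubernetes-upgrade-012935" // from the log above

	steps := [][]string{
		{"start", "-p", profile, "--memory=2200", "--kubernetes-version=v1.16.0", "--driver=docker"},
		{"stop", "-p", profile},
		{"start", "-p", profile, "--memory=2200", "--kubernetes-version=v1.25.3", "--driver=docker"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			fmt.Println(err)
			return
		}
	}

	// A downgrade attempt is expected to fail (exit status 106 in the log above).
	if err := run("start", "-p", profile, "--memory=2200", "--kubernetes-version=v1.16.0", "--driver=docker"); err != nil {
		fmt.Println("downgrade refused as expected:", err)
	}
}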

                                                
                                    
TestMissingContainerUpgrade (271.18s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.1.2483850866.exe start -p missing-upgrade-012926 --memory=2200 --driver=docker

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.1.2483850866.exe start -p missing-upgrade-012926 --memory=2200 --driver=docker: (2m28.8392911s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-012926

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-012926: (17.0713994s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-012926
version_upgrade_test.go:336: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-012926 --memory=2200 --alsologtostderr -v=1 --driver=docker
E1025 01:32:15.932200    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 01:33:03.374862    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-012926 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m26.6571897s)
helpers_test.go:175: Cleaning up "missing-upgrade-012926" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-012926

                                                
                                                
=== CONT  TestMissingContainerUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-012926: (17.6934526s)
--- PASS: TestMissingContainerUpgrade (271.18s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.66s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-012456 --no-kubernetes --kubernetes-version=1.20 --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-012456 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (541.5601ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-012456] minikube v1.27.1 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14956
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.54s)

                                                
                                    
TestPause/serial/Start (159.42s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-012456 --memory=2048 --install-addons=false --wait=all --driver=docker

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-012456 --memory=2048 --install-addons=false --wait=all --driver=docker: (2m39.4249204s)
--- PASS: TestPause/serial/Start (159.42s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (207.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-012456 --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-012456 --driver=docker: (3m25.6519906s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-012456 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-012456 status -o json: (1.7768172s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (207.43s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (260.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.3122540519.exe start -p stopped-upgrade-012456 --memory=2200 --vm-driver=docker
E1025 01:26:06.575074    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 01:26:26.544207    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.3122540519.exe start -p stopped-upgrade-012456 --memory=2200 --vm-driver=docker: (3m4.3166209s)
version_upgrade_test.go:199: (dbg) Run:  C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.3122540519.exe -p stopped-upgrade-012456 stop
E1025 01:28:03.385667    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.3122540519.exe -p stopped-upgrade-012456 stop: (13.3641243s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-012456 --memory=2200 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-012456 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m3.2291425s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (260.91s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (55.28s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-012456 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-012456 --alsologtostderr -v=1 --driver=docker: (55.2553767s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (55.28s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (27.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-012456 --no-kubernetes --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-012456 --no-kubernetes --driver=docker: (19.9035897s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-012456 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-012456 status -o json: exit status 2 (1.7185127s)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-012456","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-012456
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-012456: (5.478146s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.10s)
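
"status -o json" prints a single JSON object for the profile, and it still does so when components are down (hence exit status 2 above), so the fields in the stdout block can be decoded directly. A minimal sketch follows; the struct mirrors the keys shown above rather than any exported minikube type.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors the keys in the JSON status line above; it is not a minikube type.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// The command exits non-zero (2 in the log above) when components are stopped,
	// but it still prints the JSON, so keep the captured stdout either way.
	out, _ := exec.Command("minikube", "-p", "NoKubernetes-012456", "status", "-o", "json").Output()

	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n", st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}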

                                                
                                    
TestNoKubernetes/serial/Start (22.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-012456 --no-kubernetes --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-012456 --no-kubernetes --driver=docker: (22.3993231s)
--- PASS: TestNoKubernetes/serial/Start (22.40s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (1.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-012456 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-012456 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.6103538s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (1.61s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (9.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list

                                                
                                                
=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (3.900084s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (5.7732276s)
--- PASS: TestNoKubernetes/serial/ProfileList (9.67s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (9.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-012456

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-012456: (9.9792179s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.98s)

                                                
                                    
TestNoKubernetes/serial/Stop (4.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-012456

                                                
                                                
=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-012456: (4.1909563s)
--- PASS: TestNoKubernetes/serial/Stop (4.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (18.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-012456 --driver=docker
E1025 01:29:29.771482    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-012456 --driver=docker: (18.4661506s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (18.47s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-012456 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-012456 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.5317602s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (181.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-013521 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-013521 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (3m1.8838383s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (181.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (164.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-013544 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-013544 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.3: (2m44.3242387s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (164.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (136.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-013544 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.3
E1025 01:36:26.549425    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-013544 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.3: (2m16.5720672s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (136.57s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (102.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-013732 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-013732 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.3: (1m42.7002092s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (102.70s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (12.73s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-013544 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) Done: kubectl --context embed-certs-013544 create -f testdata\busybox.yaml: (1.1082158s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [2b01cd07-a3cb-4bdc-b3f9-da2280fbd771] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1025 01:38:03.388003    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
helpers_test.go:342: "busybox" [2b01cd07-a3cb-4bdc-b3f9-da2280fbd771] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.0733128s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-013544 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.73s)
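Note on the DeployApp steps: each variant creates the busybox pod from testdata\busybox.yaml, polls the default namespace until pods labelled integration-test=busybox are healthy, and then runs ulimit -n inside the pod. The following Go sketch shows one way to reproduce the polling step outside the suite; it assumes kubectl on PATH and the embed-certs-013544 context from this run, it checks only the Running phase rather than full readiness, and the waitForBusybox helper is an illustrative name, not the suite's helpers_test.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForBusybox polls kubectl until every pod carrying the
// integration-test=busybox label reports phase Running, or the deadline
// passes. The real test helper also inspects readiness conditions, which
// this sketch deliberately skips.
func waitForBusybox(context string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pods", "-n", "default",
			"-l", "integration-test=busybox",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			running := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					running = false
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods matching integration-test=busybox not Running within %v", timeout)
}

func main() {
	if err := waitForBusybox("embed-certs-013544", 8*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	// At this point the suite goes on to run: kubectl exec busybox -- /bin/sh -c "ulimit -n"
	fmt.Println("busybox is Running")
}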

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-013544 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-013544 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.1630359s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-013544 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.60s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-013544 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-013544 --alsologtostderr -v=3: (13.9154921s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-013521 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [3064da1c-1031-41ce-8a9b-17418363c830] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [3064da1c-1031-41ce-8a9b-17418363c830] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.049836s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-013521 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-013544 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [60e494fa-be11-490b-983c-a09a72c9a91d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:342: "busybox" [60e494fa-be11-490b-983c-a09a72c9a91d] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.0405212s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-013544 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-013544 -n embed-certs-013544
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-013544 -n embed-certs-013544: exit status 7 (636.3498ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-013544 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (348.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-013544 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-013544 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.3: (5m45.5920195s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-013544 -n embed-certs-013544
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-013544 -n embed-certs-013544: (2.550458s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (348.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-013521 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-013521 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.2453908s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-013521 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-013521 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-013521 --alsologtostderr -v=3: (13.3386293s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-013544 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-013544 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.4208979s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-013544 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.83s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-013544 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-013544 --alsologtostderr -v=3: (13.883112s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-013521 -n old-k8s-version-013521
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-013521 -n old-k8s-version-013521: exit status 7 (564.863ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-013521 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (431.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-013521 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-013521 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (7m9.8960406s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-013521 -n old-k8s-version-013521

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-013521 -n old-k8s-version-013521: (1.9157274s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (431.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-013544 -n no-preload-013544
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-013544 -n no-preload-013544: exit status 7 (624.5396ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-013544 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (369.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-013544 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.3
E1025 01:39:11.599103    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-013544 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.3: (6m6.9263359s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-013544 -n no-preload-013544

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-013544 -n no-preload-013544: (2.4130543s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (369.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-013732 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [731df990-b545-43b5-8e11-23c365d161b9] Pending
helpers_test.go:342: "busybox" [731df990-b545-43b5-8e11-23c365d161b9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [731df990-b545-43b5-8e11-23c365d161b9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.0601927s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-013732 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-013732 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-013732 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.7129561s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-013732 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-013732 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-013732 --alsologtostderr -v=3: (13.7913673s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.79s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-013732 -n default-k8s-diff-port-013732
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-013732 -n default-k8s-diff-port-013732: exit status 7 (613.8569ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-013732 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (361.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-013732 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.3
E1025 01:41:26.551425    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
E1025 01:42:46.587729    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 01:43:03.380262    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
E1025 01:44:11.590008    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-013732 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.3: (5m58.5203807s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-013732 -n default-k8s-diff-port-013732

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-013732 -n default-k8s-diff-port-013732: (3.0500905s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (361.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (26.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-vv9xp" [8525843f-e604-408d-bd8c-532f71001524] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-vv9xp" [8525843f-e604-408d-bd8c-532f71001524] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 26.0497114s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (26.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-vv9xp" [8525843f-e604-408d-bd8c-532f71001524] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.036481s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-013544 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.72s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (1.73s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-013544 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-013544 "sudo crictl images -o json": (1.730636s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (1.73s)
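Note on VerifyKubernetesImages: the step lists images through crictl inside the node and reports anything outside the expected Kubernetes image set (here gcr.io/k8s-minikube/busybox:1.28.4-glibc). The Go sketch below shows one way to pull the same tag list; it assumes crictl's JSON output exposes an images array with repoTags fields and that out/minikube-windows-amd64.exe is invocable from the working directory, and listImageTags is an illustrative name, not the suite's start_stop_delete_test.go code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages matches the shape of "crictl images -o json" output well
// enough for this sketch: an images array whose entries carry repoTags.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// listImageTags shells into the minikube node for the given profile and
// returns every image tag the container runtime reports.
func listImageTags(profile string) ([]string, error) {
	out, err := exec.Command("out/minikube-windows-amd64.exe", "ssh", "-p", profile,
		"sudo crictl images -o json").Output()
	if err != nil {
		return nil, err
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return nil, err
	}
	var tags []string
	for _, img := range parsed.Images {
		tags = append(tags, img.RepoTags...)
	}
	return tags, nil
}

func main() {
	tags, err := listImageTags("embed-certs-013544")
	if err != nil {
		fmt.Println("listing images failed:", err)
		return
	}
	for _, t := range tags {
		fmt.Println(t)
	}
}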

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (13.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-013544 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-013544 --alsologtostderr -v=1: (2.9084189s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-013544 -n embed-certs-013544
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-013544 -n embed-certs-013544: exit status 2 (1.6704246s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-013544 -n embed-certs-013544
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-013544 -n embed-certs-013544: exit status 2 (1.7979109s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-013544 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-013544 --alsologtostderr -v=1: (2.7915577s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-013544 -n embed-certs-013544

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-013544 -n embed-certs-013544: (2.3357742s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-013544 -n embed-certs-013544
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-013544 -n embed-certs-013544: (2.3170606s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (13.82s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (30.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-2gxfw" [de35ef13-3cfc-4b15-9e45-33794b90639d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-2gxfw" [de35ef13-3cfc-4b15-9e45-33794b90639d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 30.4206964s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (30.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (156.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-014519 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-014519 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.3: (2m36.3513778s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (156.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-2gxfw" [de35ef13-3cfc-4b15-9e45-33794b90639d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0840444s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-013544 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-013544 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-013544 "sudo crictl images -o json": (2.4234617s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (2.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (18.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-013544 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-013544 --alsologtostderr -v=1: (3.3084206s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-013544 -n no-preload-013544
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-013544 -n no-preload-013544: exit status 2 (2.1071942s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-013544 -n no-preload-013544
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-013544 -n no-preload-013544: exit status 2 (1.742006s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-013544 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-013544 --alsologtostderr -v=1: (7.0029513s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-013544 -n no-preload-013544

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-013544 -n no-preload-013544: (2.3056351s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-013544 -n no-preload-013544

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-013544 -n no-preload-013544: (2.0905844s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (18.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (47.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-cdq8r" [a5b5ec86-58db-49e0-b9f4-52276d4e8b94] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-cdq8r" [a5b5ec86-58db-49e0-b9f4-52276d4e8b94] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 47.0870521s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (47.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-2crrs" [808d9838-9cbe-41ba-9c00-9012e6d99bea] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0610551s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (17.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-2crrs" [808d9838-9cbe-41ba-9c00-9012e6d99bea] Running
E1025 01:46:09.779444    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.0540297s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-013521 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (17.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-013521 "sudo crictl images -o json"
E1025 01:46:26.559863    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-004854\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-013521 "sudo crictl images -o json": (1.7675312s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.77s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (12.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-013521 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-013521 --alsologtostderr -v=1: (2.9139844s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-013521 -n old-k8s-version-013521
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-013521 -n old-k8s-version-013521: exit status 2 (1.5644124s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-013521 -n old-k8s-version-013521
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-013521 -n old-k8s-version-013521: exit status 2 (1.639063s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-013521 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-013521 --alsologtostderr -v=1: (2.3281682s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-013521 -n old-k8s-version-013521
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-013521 -n old-k8s-version-013521: (2.1166059s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-013521 -n old-k8s-version-013521

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-013521 -n old-k8s-version-013521: (1.9234872s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (12.49s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (124.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-012955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-012955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: (2m4.8975712s)
--- PASS: TestNetworkPlugins/group/auto/Start (124.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-cdq8r" [a5b5ec86-58db-49e0-b9f4-52276d4e8b94] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0696207s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-013732 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-013732 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-013732 "sudo crictl images -o json": (1.9312172s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.93s)
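Note: the image-verification step above reduces to listing the images present on the node and scanning their tags. A minimal hand-run sketch of the same check, using an illustrative profile name "demo" (the profile name is an assumption, not taken from this run):

	# List every container image known to the node's container runtime, as JSON
	out/minikube-windows-amd64.exe ssh -p demo "sudo crictl images -o json"
	# The test walks the repoTags in this JSON and reports any image outside the
	# expected Kubernetes/minikube set, e.g. gcr.io/k8s-minikube/busybox above.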

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (24.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-013732 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-013732 --alsologtostderr -v=1: (2.9576233s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-013732 -n default-k8s-diff-port-013732
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-013732 -n default-k8s-diff-port-013732: exit status 2 (1.6713889s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-013732 -n default-k8s-diff-port-013732
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-013732 -n default-k8s-diff-port-013732: exit status 2 (1.6017453s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-013732 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-013732 --alsologtostderr -v=1: (8.8134746s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-013732 -n default-k8s-diff-port-013732

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-013732 -n default-k8s-diff-port-013732: (7.2374211s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-013732 -n default-k8s-diff-port-013732
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-013732 -n default-k8s-diff-port-013732: (1.8381658s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (24.12s)
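Note: the pause check follows a fixed pattern: pause the profile, confirm via templated status output that the components report a paused state (the non-zero status exit is tolerated there, as logged above), unpause, and confirm status completes cleanly again. A rough sketch with an illustrative profile name "demo":

	out/minikube-windows-amd64.exe pause -p demo --alsologtostderr -v=1
	# While paused, APIServer reports "Paused" and Kubelet "Stopped"; status exits 2,
	# which the test accepts ("may be ok")
	out/minikube-windows-amd64.exe status --format={{.APIServer}} -p demo -n demo
	out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p demo -n demo
	out/minikube-windows-amd64.exe unpause -p demo --alsologtostderr -v=1
	# After unpause the same status commands complete with exit code 0
	out/minikube-windows-amd64.exe status --format={{.APIServer}} -p demo -n demo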

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-014519 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-014519 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.2287005s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (5.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-014519 --alsologtostderr -v=3
E1025 01:48:03.384362    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-000838\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-014519 --alsologtostderr -v=3: (5.496502s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.50s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-014519 -n newest-cni-014519
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-014519 -n newest-cni-014519: exit status 7 (661.0523ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-014519 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (50.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-014519 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.3
E1025 01:48:23.707602    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:48:23.722927    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:48:23.738904    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:48:23.769905    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:48:23.817579    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:48:23.912557    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:48:24.085639    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:48:24.417038    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:48:25.061374    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:48:26.347763    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:48:28.909652    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:48:29.569285    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
E1025 01:48:29.584593    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
E1025 01:48:29.600579    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
E1025 01:48:29.631288    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
E1025 01:48:29.679934    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
E1025 01:48:29.772981    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
E1025 01:48:29.945180    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
E1025 01:48:30.274880    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
E1025 01:48:30.917870    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
E1025 01:48:32.200204    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-014519 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.3: (48.4362052s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-014519 -n newest-cni-014519
E1025 01:48:55.952025    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-014519 -n newest-cni-014519: (2.1939215s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (50.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (1.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-012955 "pgrep -a kubelet"
E1025 01:48:34.042876    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:48:34.765342    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-012955 "pgrep -a kubelet": (1.583662s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (1.58s)
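Note: the KubeletFlags step is a remote process listing: pgrep -a prints the kubelet command line, which the test then inspects for the settings the chosen network plugin should have applied. A hand-run equivalent, with an assumed profile name:

	# Print the kubelet PID and its full command line inside the node
	out/minikube-windows-amd64.exe ssh -p demo "pgrep -a kubelet"
	# The printed flags (container runtime and networking configuration) are what
	# the per-plugin assertions are made against.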

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (22.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-012955 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-lckgp" [c7a42b08-dfd1-44ba-b869-df70f3b9712a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 01:48:39.886374    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
E1025 01:48:44.284986    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-013521\client.crt: The system cannot find the path specified.
E1025 01:48:50.130540    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-013544\client.crt: The system cannot find the path specified.
helpers_test.go:342: "netcat-5788d667bd-lckgp" [c7a42b08-dfd1-44ba-b869-df70f3b9712a] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 22.0489313s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (22.77s)
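Note: each NetCatPod step deploys the same small netcat/dnsutils workload and waits for it to become Ready; the DNS, Localhost, and HairPin probes that follow reuse it. Roughly, with an illustrative context name (the test itself polls pods matching app=netcat rather than using kubectl wait):

	# (Re)create the test deployment from the bundled manifest
	kubectl --context demo replace --force -f testdata\netcat-deployment.yaml
	# Block until the pod is Ready, analogous to the helper's app=netcat health wait
	kubectl --context demo wait --for=condition=ready pod -l app=netcat --timeout=15m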

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (2.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-014519 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-014519 "sudo crictl images -o json": (2.4838935s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (2.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-012955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.66s)
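Note: the DNS probe is a single in-cluster lookup: if the netcat pod can resolve the kubernetes.default Service name, cluster DNS works under that network plugin. Equivalent by hand (context name illustrative):

	# Resolve the API server's Service name from inside the test pod
	kubectl --context demo exec deployment/netcat -- nslookup kubernetes.default
	# A zero exit status is what the test treats as a passing lookup.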

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (5.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.7761973s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.79s)
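Note: the HairPin probe has the pod dial its own Service ("netcat" on port 8080), i.e. hairpin traffic back through the Service VIP. Whether that connection is expected to succeed depends on the plugin under test; in this run the non-zero nc exit was the accepted outcome for the auto plugin, so the case still passes. By hand (context name illustrative):

	# From inside the pod, attempt a TCP connect back to its own Service
	kubectl --context demo exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
	# nc -z only tests that the port is reachable; -w 5 bounds each attempt to 5 seconds.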

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (362.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-012957 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker
E1025 01:49:20.900991    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p false-012957 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: (6m2.1025619s)
--- PASS: TestNetworkPlugins/group/false/Start (362.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (362.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-012955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker
E1025 01:54:58.064655    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-012955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: (6m2.503569s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (362.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (1.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-012957 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-012957 "pgrep -a kubelet": (1.5137052s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (1.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (21.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-012957 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-qqd8q" [3b146470-fbd5-4993-ae8e-52fd5a593be3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-qqd8q" [3b146470-fbd5-4993-ae8e-52fd5a593be3] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 21.0543508s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (21.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (354.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-012955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-012955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: (5m54.7734691s)
--- PASS: TestNetworkPlugins/group/bridge/Start (354.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (97.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-012955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-012955 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: (1m37.7277687s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (97.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (1.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-012955 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-012955 "pgrep -a kubelet": (1.4077481s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (1.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (20.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-012955 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-5q7mr" [746f643a-2808-40a9-b90e-d857883ef75a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 01:59:03.838586    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-012955\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-5q7mr" [746f643a-2808-40a9-b90e-d857883ef75a] Running
E1025 01:59:15.691852    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\default-k8s-diff-port-013732\client.crt: The system cannot find the path specified.
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 19.0361018s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (20.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-012955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-012955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-012955 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-012955 "pgrep -a kubelet": (1.390612s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (20.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-012955 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-wfwj4" [de707c5c-7b61-4aa4-a965-407022948233] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-wfwj4" [de707c5c-7b61-4aa4-a965-407022948233] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 20.0877649s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (20.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (1.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-012955 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-012955 "pgrep -a kubelet": (1.3731206s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (1.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (25.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-012955 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-k2d9w" [9db12fa6-9ed1-4bf3-a097-5f0cb7be507c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-k2d9w" [9db12fa6-9ed1-4bf3-a097-5f0cb7be507c] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 25.0377084s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (25.75s)

                                                
                                    

Test skip (25/265)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Registry (23.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: registry stabilized in 25.0507ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-npg2s" [7cd06dc8-80c8-4a76-ad5b-4ab6424fdbf6] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.1765999s

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:287: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-proxy-hs9bw" [44014e0b-5a42-462b-9880-e4dcdcac64bd] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:287: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.110278s
addons_test.go:292: (dbg) Run:  kubectl --context addons-235811 delete po -l run=registry-test --now
addons_test.go:297: (dbg) Run:  kubectl --context addons-235811 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) Done: kubectl --context addons-235811 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (13.2921057s)
addons_test.go:307: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (23.94s)
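Note: the registry check exercises the addon end to end: wait for the registry and registry-proxy pods, then probe the in-cluster Service by DNS name from a throwaway busybox pod; wget --spider verifies reachability without downloading anything. The final in-cluster step, hand-run with an illustrative context name:

	# One-off pod that probes the registry Service and is removed afterwards (--rm)
	kubectl --context demo run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# The remaining host-side checks are skipped on this driver because of the
	# connectivity assumptions noted in the SKIP above.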

                                                
                                    
x
+
TestAddons/parallel/Ingress (42.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Run:  kubectl --context addons-235811 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:184: (dbg) Run:  kubectl --context addons-235811 replace --force -f testdata\nginx-ingress-v1.yaml

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:184: (dbg) Done: kubectl --context addons-235811 replace --force -f testdata\nginx-ingress-v1.yaml: (4.125472s)
addons_test.go:197: (dbg) Run:  kubectl --context addons-235811 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:197: (dbg) Done: kubectl --context addons-235811 replace --force -f testdata\nginx-pod-svc.yaml: (1.6988756s)
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [3aee9fd0-03f6-4364-acd2-d1237b56681a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [3aee9fd0-03f6-4364-acd2-d1237b56681a] Running
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 35.2803114s
addons_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-235811 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:214: (dbg) Done: out/minikube-windows-amd64.exe -p addons-235811 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (1.4225007s)
addons_test.go:234: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (42.91s)
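Note: the ingress check applies a test Ingress plus an nginx pod/Service, waits for the pod, then curls the controller from inside the node with the Host header the Ingress rule routes on; only the follow-up DNS portion is skipped on port-forwarded drivers. A condensed hand-run version (profile/context name illustrative, kubectl wait standing in for the test's pod polling):

	kubectl --context demo replace --force -f testdata\nginx-ingress-v1.yaml
	kubectl --context demo replace --force -f testdata\nginx-pod-svc.yaml
	kubectl --context demo wait --for=condition=ready pod -l run=nginx --timeout=8m
	# Hit the ingress controller on the node itself, routing by Host header
	out/minikube-windows-amd64.exe -p demo ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"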

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:450: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-000838 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:909: output didn't produce a URL
functional_test.go:903: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-000838 --alsologtostderr -v=1] ...
helpers_test.go:500: unable to terminate pid 200: Access is denied.
E1025 00:24:11.559559    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:25:35.870408    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:29:11.561663    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:34:11.568757    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:39:11.562156    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:42:15.890569    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
E1025 00:44:11.564917    4200 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-235811\client.crt: The system cannot find the path specified.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (49.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-000838 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-000838 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-dqbjm" [60be234e-9cfb-4d1e-a73b-684ade4e8ec4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-dqbjm" [60be234e-9cfb-4d1e-a73b-684ade4e8ec4] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 49.1930957s
functional_test.go:1576: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (49.96s)
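Note: ServiceCmdConnect sets up the standard NodePort workflow: create an echoserver deployment, expose it as a NodePort Service, and wait for the pod; the final connect step is skipped on port-forwarded drivers per the note above. The setup can be reproduced by hand as sketched below; the closing service --url call is an assumption about how one would query the URL outside the test, not a step taken from this log:

	kubectl --context demo create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
	kubectl --context demo expose deployment hello-node-connect --type=NodePort --port=8080
	kubectl --context demo wait --for=condition=ready pod -l app=hello-node-connect --timeout=10m
	# Ask minikube for a URL that forwards to the NodePort Service
	out/minikube-windows-amd64.exe -p demo service --url hello-node-connect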

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (37.87s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:164: (dbg) Run:  kubectl --context ingress-addon-legacy-004854 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:164: (dbg) Done: kubectl --context ingress-addon-legacy-004854 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.0714455s)
addons_test.go:184: (dbg) Run:  kubectl --context ingress-addon-legacy-004854 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-004854 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:197: (dbg) Done: kubectl --context ingress-addon-legacy-004854 replace --force -f testdata\nginx-pod-svc.yaml: (1.1990688s)
addons_test.go:202: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [2eacf961-93d2-4797-97ba-a9981683631f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [2eacf961-93d2-4797-97ba-a9981683631f] Running
addons_test.go:202: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 23.1841624s
addons_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-004854 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:214: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-004854 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (1.3953318s)
addons_test.go:234: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (37.87s)

                                                
                                    
x
+
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (1.53s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-013730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-013730
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-013730: (1.5297553s)
--- SKIP: TestStartStop/group/disable-driver-mounts (1.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel (1.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-012955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-012955
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-012955: (1.8011858s)
--- SKIP: TestNetworkPlugins/group/flannel (1.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel (1.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-012957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-flannel-012957
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-flannel-012957: (1.6988226s)
--- SKIP: TestNetworkPlugins/group/custom-flannel (1.70s)

                                                
                                    