Test Report: Docker_Windows 15642

4cf467cecc4d49355139c24bc1420f3978a367dd:2023-01-14:27426

Tests failed: 6/280

Order  Failed test                                Duration (s)
   82  TestFunctional/parallel/ServiceCmd              2165.43
  308  TestNetworkPlugins/group/cilium/Start            574.04
  318  TestNetworkPlugins/group/calico/Start            613.05
  330  TestNetworkPlugins/group/false/DNS               341.51
  333  TestNetworkPlugins/group/bridge/DNS              330.54
  344  TestNetworkPlugins/group/kubenet/HairPin          62.07
TestFunctional/parallel/ServiceCmd (2165.43s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-102159 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-102159 expose deployment hello-node --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-m9lg9" [80b0312d-0143-4562-a5aa-6101d62dda34] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-m9lg9" [80b0312d-0143-4562-a5aa-6101d62dda34] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 28.0936468s
functional_test.go:1449: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 service list
functional_test.go:1449: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 service list: (2.0218423s)
functional_test.go:1463: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1392: Failed to sent interrupt to proc not supported by windows

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-102159 service --namespace=default --https --url hello-node: exit status 1 (35m26.1854961s)

-- stdout --
	https://127.0.0.1:62653

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1465: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-102159 service --namespace=default --https --url hello-node" : exit status 1
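
The two log lines above point at the root cause: with the Docker driver on Windows, `minikube service --url` keeps a foreground tunnel process open for as long as the URL is in use (hence the "terminal needs to be open" warning), and the harness cannot send it an interrupt (functional_test.go:1392), so waiting for the command to exit blocks until the timeout even though the URL (https://127.0.0.1:62653) had already been printed. Below is a minimal Go sketch of reading the URL from the still-running process instead of waiting for it to exit; it is illustrative only and is not the test's actual helper, and the function name serviceURL, the hard-coded binary path, profile name, and 2-minute timeout are assumptions.

	// Illustrative sketch (not the minikube test helper): grab the tunnel URL
	// from a still-running `minikube service --url` process instead of waiting
	// for it to exit, which on the Windows/Docker driver it never does on its own.
	package main

	import (
		"bufio"
		"context"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// serviceURL starts `minikube service --https --url <svc>` for the given
	// profile, returns the first https:// line it prints plus a cancel func
	// that kills the tunnel process once the caller is done with the URL.
	func serviceURL(ctx context.Context, profile, svc string) (string, context.CancelFunc, error) {
		ctx, cancel := context.WithCancel(ctx)
		cmd := exec.CommandContext(ctx, "out/minikube-windows-amd64.exe",
			"-p", profile, "service", "--namespace=default", "--https", "--url", svc)
		out, err := cmd.StdoutPipe()
		if err != nil {
			cancel()
			return "", nil, err
		}
		if err := cmd.Start(); err != nil {
			cancel()
			return "", nil, err
		}
		urlCh := make(chan string, 1)
		go func() {
			sc := bufio.NewScanner(out)
			for sc.Scan() {
				if line := strings.TrimSpace(sc.Text()); strings.HasPrefix(line, "https://") {
					urlCh <- line
					return
				}
			}
		}()
		select {
		case u := <-urlCh:
			return u, cancel, nil // tunnel stays open until cancel() is called
		case <-time.After(2 * time.Minute):
			cancel()
			return "", nil, fmt.Errorf("timed out waiting for %s service URL", svc)
		}
	}

	func main() {
		url, stop, err := serviceURL(context.Background(), "functional-102159", "hello-node")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		defer stop()
		fmt.Println("service URL:", url)
	}
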
functional_test.go:1402: service test failed - dumping debug information
functional_test.go:1403: -----------------------service failure post-mortem--------------------------------
functional_test.go:1406: (dbg) Run:  kubectl --context functional-102159 describe po hello-node
functional_test.go:1410: hello-node pod describe:
Name:         hello-node-5fcdfb5cc4-m9lg9
Namespace:    default
Priority:     0
Node:         functional-102159/192.168.49.2
Start Time:   Sat, 14 Jan 2023 10:26:02 +0000
Labels:       app=hello-node
pod-template-hash=5fcdfb5cc4
Annotations:  <none>
Status:       Running
IP:           172.17.0.3
IPs:
IP:           172.17.0.3
Controlled By:  ReplicaSet/hello-node-5fcdfb5cc4
Containers:
echoserver:
Container ID:   docker://126fb125f52e4640dc9a13d87a2f5c93d62a67a35b62cd3539c0c64adafd6778
Image:          k8s.gcr.io/echoserver:1.8
Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port:           <none>
Host Port:      <none>
State:          Running
Started:      Sat, 14 Jan 2023 10:26:24 +0000
Ready:          True
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w4s7s (ro)
Conditions:
Type              Status
Initialized       True 
Ready             True 
ContainersReady   True 
PodScheduled      True 
Volumes:
kube-api-access-w4s7s:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type    Reason     Age        From                        Message
----    ------     ----       ----                        -------
Normal  Scheduled  <unknown>                              Successfully assigned default/hello-node-5fcdfb5cc4-m9lg9 to functional-102159
Normal  Pulling    35m        kubelet, functional-102159  Pulling image "k8s.gcr.io/echoserver:1.8"
Normal  Pulled     35m        kubelet, functional-102159  Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 19.3431569s
Normal  Created    35m        kubelet, functional-102159  Created container echoserver
Normal  Started    35m        kubelet, functional-102159  Started container echoserver

Name:         hello-node-connect-6458c8fb6f-5bzgt
Namespace:    default
Priority:     0
Node:         functional-102159/192.168.49.2
Start Time:   Sat, 14 Jan 2023 10:26:42 +0000
Labels:       app=hello-node-connect
pod-template-hash=6458c8fb6f
Annotations:  <none>
Status:       Running
IP:           172.17.0.6
IPs:
IP:           172.17.0.6
Controlled By:  ReplicaSet/hello-node-connect-6458c8fb6f
Containers:
echoserver:
Container ID:   docker://d5073e9565a3df99b65672acfc14d9c0f95dda871b5dcfd3cbcdd386982c4b3c
Image:          k8s.gcr.io/echoserver:1.8
Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port:           <none>
Host Port:      <none>
State:          Running
Started:      Sat, 14 Jan 2023 10:26:44 +0000
Ready:          True
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9k9jn (ro)
Conditions:
Type              Status
Initialized       True 
Ready             True 
ContainersReady   True 
PodScheduled      True 
Volumes:
kube-api-access-9k9jn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type    Reason     Age        From                        Message
----    ------     ----       ----                        -------
Normal  Scheduled  <unknown>                              Successfully assigned default/hello-node-connect-6458c8fb6f-5bzgt to functional-102159
Normal  Pulled     35m        kubelet, functional-102159  Container image "k8s.gcr.io/echoserver:1.8" already present on machine
Normal  Created    35m        kubelet, functional-102159  Created container echoserver
Normal  Started    35m        kubelet, functional-102159  Started container echoserver

functional_test.go:1412: (dbg) Run:  kubectl --context functional-102159 logs -l app=hello-node
functional_test.go:1416: hello-node logs:
functional_test.go:1418: (dbg) Run:  kubectl --context functional-102159 describe svc hello-node
functional_test.go:1422: hello-node svc describe:
Name:                     hello-node
Namespace:                default
Labels:                   app=hello-node
Annotations:              <none>
Selector:                 app=hello-node
Type:                     NodePort
IP:                       10.103.134.250
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32206/TCP
Endpoints:                172.17.0.3:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
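
Note that the service object itself looks healthy here (a NodePort with a live endpoint at 172.17.0.3:8080), so the failure sits on the host side of the tunnel rather than in the cluster. A quick manual probe of the URL the command printed (https://127.0.0.1:62653) while the tunnel is still open would confirm that. The sketch below is hypothetical, assumes the tunnel process is still running, and also tries plain http in case the --https flag only changes the printed scheme.

	// Hypothetical manual probe (not part of the test): hit the URL printed by
	// `minikube service --https --url` while its tunnel is still open.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 10 * time.Second,
			Transport: &http.Transport{
				// The forwarded endpoint will not present a trusted certificate.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// Port 62653 is the one printed in the -- stdout -- block above.
		for _, u := range []string{"https://127.0.0.1:62653", "http://127.0.0.1:62653"} {
			resp, err := client.Get(u)
			if err != nil {
				fmt.Printf("%s: %v\n", u, err)
				continue
			}
			fmt.Printf("%s: %s\n", u, resp.Status)
			resp.Body.Close()
		}
	}
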
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-102159
helpers_test.go:235: (dbg) docker inspect functional-102159:

-- stdout --
	[
	    {
	        "Id": "0013e6d15176e4c1f5a008ac0f8604b5c035cd30c23d91a93c7ea19aea7e899d",
	        "Created": "2023-01-14T10:22:37.0109251Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27855,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T10:22:37.953171Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/0013e6d15176e4c1f5a008ac0f8604b5c035cd30c23d91a93c7ea19aea7e899d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0013e6d15176e4c1f5a008ac0f8604b5c035cd30c23d91a93c7ea19aea7e899d/hostname",
	        "HostsPath": "/var/lib/docker/containers/0013e6d15176e4c1f5a008ac0f8604b5c035cd30c23d91a93c7ea19aea7e899d/hosts",
	        "LogPath": "/var/lib/docker/containers/0013e6d15176e4c1f5a008ac0f8604b5c035cd30c23d91a93c7ea19aea7e899d/0013e6d15176e4c1f5a008ac0f8604b5c035cd30c23d91a93c7ea19aea7e899d-json.log",
	        "Name": "/functional-102159",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-102159:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-102159",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/74c22ead183e5550a8305f10860d3bca44643619148339b5094dc0d728401534-init/diff:/var/lib/docker/overlay2/0319fc5680615c1d80ed1d2dd4ec2e28e4900e96dc89cfb3186ce0daa2f0c798/diff:/var/lib/docker/overlay2/641c0be8e9dcba148f49ccaf3907690f03e68a0453054a018cc2d8d554ceb228/diff:/var/lib/docker/overlay2/f192bff258f753c4dd8a9584547f6a30e0c3ff7a8ee0308be3e0e487487f1745/diff:/var/lib/docker/overlay2/9263b14da091cc57f0e2b54392ed379e6a1aac266f9a63e30675e9c43e7588a0/diff:/var/lib/docker/overlay2/aecae5087861fb71f7cf3b407e0830d3a2f5427641436c01100110504ebcd43b/diff:/var/lib/docker/overlay2/1cceb96d81956493568b9f4730cafe1ed39e522a1f60ed52ed7d0e4eb8abce3a/diff:/var/lib/docker/overlay2/ec4719e9a881a9f037bdfbede7568a0aba53797f4d2ef6c8a02b498f4495698a/diff:/var/lib/docker/overlay2/f75fa843f23c738b5f0bd4ffbb5174ea490a1cd333873e37a6ed5ad14d38a9d1/diff:/var/lib/docker/overlay2/75422af761b92f586be003f28ac3a39f970908caa891178940eba46ec4806383/diff:/var/lib/docker/overlay2/316756
87e92fb1275a51786e88c59966ec6bc403cf768ab98561f8d3926ee5d8/diff:/var/lib/docker/overlay2/2932be832d371e1053d62c6d513f4d77c43d23463aaaf0d98aee67b3f7602540/diff:/var/lib/docker/overlay2/2c44356fce90aac744db8a0a4035502d16fea1da630ce147e088e157dceef923/diff:/var/lib/docker/overlay2/9b03cc19f9697ba0d664ef1eb34ddfc1e549e9031135b5b8b1dafa7454d399cc/diff:/var/lib/docker/overlay2/baab503d3a26d91d289e28c73212f5271a16469af630e973714bf9b7acc2a206/diff:/var/lib/docker/overlay2/d0e4408ba7d017cf5e4d2739628754664bdea24b7186b7efc267ec03a54d2283/diff:/var/lib/docker/overlay2/9accf9e797118b7b88e2270d1f022551d5e9bc77097e49e8c5acca75fafbe2a9/diff:/var/lib/docker/overlay2/e05ce2bcfac188595f4eb1980cf86c681398ab65fe6bc25d8129f27a6c87b8a9/diff:/var/lib/docker/overlay2/c052f07688bdd2e85ca704f876a24ebf259bf7fe7fc122473d68e5b5f4a37b52/diff:/var/lib/docker/overlay2/950ed07c617e47c75b85bb70ea3d5b83db751b0dd05915f89163962e0966ad88/diff:/var/lib/docker/overlay2/48c61d81c7eaa0e571038e496be8e54495cffce611a2c5591fa5c698eeaab5ec/diff:/var/lib/d
ocker/overlay2/85cb428a7ca5bc60e99f20c6e6851d70c9cea3c26ac86817831d4d8bd130bb13/diff:/var/lib/docker/overlay2/9ca4a5e53e6f5a7444634c91f67e314e4c31fbf79b3af20bae436b5cddddbf83/diff:/var/lib/docker/overlay2/f9c4b034315c85af252fa61e528fb4305c5a666a1251bd5e9c0e237a869b4abd/diff:/var/lib/docker/overlay2/c5e5c4c66df1a7ddb9c86cad1c9aa940caa82844ad0e35cb02276aa0ba6d0b7f/diff:/var/lib/docker/overlay2/aaf58f33ac931eb54be4cd1919570ca1af95733dcab4b9c2e10c41131e77db49/diff:/var/lib/docker/overlay2/a7b40564a575c87ea262f224ef13fc6481638ba1a63ed7240fb4bf8926a6ed85/diff:/var/lib/docker/overlay2/c4cb52a2953465db8efcff407454278e0296ab6baa055d94c7e883851c5cf217/diff:/var/lib/docker/overlay2/c81d13d116441cca9b5166ae9674e23797741062dadc263c582f2367b998983a/diff:/var/lib/docker/overlay2/4c5c0ba04aaacf397dbaee3fe647d47794f60c155c62c2ab195d5354b2205f48/diff:/var/lib/docker/overlay2/550368a0b173c5abbc3d55283b835ff0133b2b59a7c034331f8df09665618a27/diff:/var/lib/docker/overlay2/76b7a7ae6ccf1e1f3f7dc7eb334096c3b8033b536b6a9259fb5030a4440
04eba/diff:/var/lib/docker/overlay2/d2b1f30f2546d9b7f03bc4e219d94725d78d8c982c20f8f0bafdbfda5c1ae8bf/diff:/var/lib/docker/overlay2/85233a1e3baa32dd2921cf7bedeba0cf4b3da93a30b961e0087d1bdc1a4c1bc3/diff:/var/lib/docker/overlay2/cb8ef3b71a3380e31859eddfd48bb662ea2b36c8c7e7da11114980e0ba7da149/diff:/var/lib/docker/overlay2/b7f9ef634f5e76c7886184f9b63923a29d2e0b320850bfd88d27c3169035e9f9/diff:/var/lib/docker/overlay2/857989e997e2c25ce2f08f05b398edd7de66bb41d9ab58299a913342d0666fe1/diff:/var/lib/docker/overlay2/be6259f9536637946901e9ae95da97a3546ff13afb5e7c6c53122c510189a587/diff:/var/lib/docker/overlay2/9af98ef1175ee709936e3f747a75f96127fef4beb6de6ded048be349cd2beeae/diff:/var/lib/docker/overlay2/380334ebb180d7dbdf9f3daf712f6550a5275eba9cffdac50822a52a18ae9d20/diff:/var/lib/docker/overlay2/4781a5f04440c58de00439020d596e31349356aa3a7df83e3dccee9d11c40a37/diff:/var/lib/docker/overlay2/b50427ad8910da25774c4a723d44972dec4cb53ea795140cda716993f11f1fbe/diff:/var/lib/docker/overlay2/314c762565a5db119530283731d2d311f49334
fd16d0fe1ca399b41cae25e54e/diff:/var/lib/docker/overlay2/7d010f8f6973e80c495d5a95dc1bbe689146ea762229f6697047402ca4f12c0e/diff:/var/lib/docker/overlay2/b586bd70ddd07df9686091a4425ae3d15c8b5879df73e7751a868397558650b9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/74c22ead183e5550a8305f10860d3bca44643619148339b5094dc0d728401534/merged",
	                "UpperDir": "/var/lib/docker/overlay2/74c22ead183e5550a8305f10860d3bca44643619148339b5094dc0d728401534/diff",
	                "WorkDir": "/var/lib/docker/overlay2/74c22ead183e5550a8305f10860d3bca44643619148339b5094dc0d728401534/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-102159",
	                "Source": "/var/lib/docker/volumes/functional-102159/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-102159",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-102159",
	                "name.minikube.sigs.k8s.io": "functional-102159",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7086e379e867baa9b85cd24d4250a4458f5554f3849396c509ff6ee157d727eb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62389"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62390"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62391"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62392"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62393"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7086e379e867",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-102159": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0013e6d15176",
	                        "functional-102159"
	                    ],
	                    "NetworkID": "5dc832c33b0655d4aa36f8e4707672dba42fa1fc4e952757987578d3cf3b4030",
	                    "EndpointID": "282a8a85b11c4742f8b3b26abfc1524b4bea446b0fd48533127346f538bc1544",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-102159 -n functional-102159
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-102159 -n functional-102159: (1.5368487s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 logs -n 25
E0114 11:02:05.201448    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 logs -n 25: (3.4435296s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh            | functional-102159 ssh sudo cat                                         | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	|                | /usr/share/ca-certificates/9968.pem                                    |                   |                   |         |                     |                     |
	| ssh            | functional-102159 ssh sudo cat                                         | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	|                | /etc/ssl/certs/51391683.0                                              |                   |                   |         |                     |                     |
	| ssh            | functional-102159 ssh sudo cat                                         | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	|                | /etc/ssl/certs/99682.pem                                               |                   |                   |         |                     |                     |
	| ssh            | functional-102159 ssh sudo cat                                         | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	|                | /usr/share/ca-certificates/99682.pem                                   |                   |                   |         |                     |                     |
	| image          | functional-102159 image ls                                             | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	| ssh            | functional-102159 ssh sudo cat                                         | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	|                | /etc/ssl/certs/3ec20f2e.0                                              |                   |                   |         |                     |                     |
	| image          | functional-102159 image save                                           | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-102159               |                   |                   |         |                     |                     |
	|                | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar |                   |                   |         |                     |                     |
	| docker-env     | functional-102159 docker-env                                           | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	| image          | functional-102159 image rm                                             | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-102159               |                   |                   |         |                     |                     |
	| docker-env     | functional-102159 docker-env                                           | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	| image          | functional-102159 image ls                                             | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	| image          | functional-102159 image load                                           | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	|                | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar |                   |                   |         |                     |                     |
	| ssh            | functional-102159 ssh sudo cat                                         | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	|                | /etc/test/nested/copy/9968/hosts                                       |                   |                   |         |                     |                     |
	| image          | functional-102159 image ls                                             | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	| image          | functional-102159 image save --daemon                                  | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-102159               |                   |                   |         |                     |                     |
	| update-context | functional-102159                                                      | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:27 GMT | 14 Jan 23 10:27 GMT |
	|                | update-context                                                         |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |                   |         |                     |                     |
	| update-context | functional-102159                                                      | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
	|                | update-context                                                         |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |                   |         |                     |                     |
	| update-context | functional-102159                                                      | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
	|                | update-context                                                         |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |                   |         |                     |                     |
	| image          | functional-102159 image ls                                             | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
	|                | --format table                                                         |                   |                   |         |                     |                     |
	| image          | functional-102159 image ls                                             | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
	|                | --format short                                                         |                   |                   |         |                     |                     |
	| image          | functional-102159 image ls                                             | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
	|                | --format yaml                                                          |                   |                   |         |                     |                     |
	| ssh            | functional-102159 ssh pgrep                                            | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT |                     |
	|                | buildkitd                                                              |                   |                   |         |                     |                     |
	| image          | functional-102159 image build -t                                       | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
	|                | localhost/my-image:functional-102159                                   |                   |                   |         |                     |                     |
	|                | testdata\build                                                         |                   |                   |         |                     |                     |
	| image          | functional-102159 image ls                                             | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
	| image          | functional-102159 image ls                                             | functional-102159 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:28 GMT | 14 Jan 23 10:28 GMT |
	|                | --format json                                                          |                   |                   |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:26:13
	Running on machine: minikube2
	Binary: Built with gc go1.19.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:26:12.989189    6696 out.go:296] Setting OutFile to fd 1004 ...
	I0114 10:26:13.071023    6696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:26:13.071023    6696 out.go:309] Setting ErrFile to fd 768...
	I0114 10:26:13.071023    6696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:26:13.090024    6696 out.go:303] Setting JSON to false
	I0114 10:26:13.093023    6696 start.go:125] hostinfo: {"hostname":"minikube2","uptime":3584,"bootTime":1673688389,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2486 Build 19045.2486","kernelVersion":"10.0.19045.2486 Build 19045.2486","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0114 10:26:13.094020    6696 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 10:26:13.098027    6696 out.go:177] * [functional-102159] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
	I0114 10:26:13.102042    6696 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0114 10:26:13.102042    6696 notify.go:220] Checking for updates...
	I0114 10:26:13.107020    6696 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0114 10:26:13.110036    6696 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:26:13.119017    6696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:26:13.123023    6696 config.go:180] Loaded profile config "functional-102159": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 10:26:13.124026    6696 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:26:13.452022    6696 docker.go:138] docker version: linux-20.10.21
	I0114 10:26:13.461029    6696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:26:14.192106    6696 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2023-01-14 10:26:13.6571601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plug
ins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 10:26:14.206736    6696 out.go:177] * Using the docker driver based on existing profile
	I0114 10:26:14.209791    6696 start.go:294] selected driver: docker
	I0114 10:26:14.209791    6696 start.go:838] validating driver "docker" against &{Name:functional-102159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-102159 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:fals
e portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:26:14.210359    6696 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:26:14.233560    6696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:26:14.890863    6696 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2023-01-14 10:26:14.391654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugi
ns\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 10:26:14.947799    6696 cni.go:95] Creating CNI manager for ""
	I0114 10:26:14.947799    6696 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 10:26:14.947799    6696 start_flags.go:319] config:
	{Name:functional-102159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-102159 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false sto
rage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:26:14.951494    6696 out.go:177] * dry-run validation complete!
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-14 10:22:38 UTC, end at Sat 2023-01-14 11:02:04 UTC. --
	Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.188433700Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 406ae7e1542dc573185d0fc18fa6d59d30fa990d6c19a92528a72978ec171c50 9e9cf1bf2b70299fb2844ae8d73c9df55985a00c6ffd8322cdf4e7f2201576c1], retrying...."
	Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.400923000Z" level=info msg="Removing stale sandbox 48fa11f5373d28abf056f3088c7d66693324d929b7e1ae155660fab913de7932 (86fb4aa5cb191be8f92154587714113827558328ec119e493813962887543447)"
	Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.408667300Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 38b6769732b9eb35fe8627f4afe12eaf1585c21048207cda3f6313022b5a9dd8 436b52e1f8d1f5eb64cef9000b2bf3dba28b046bead179819a8c6d2e21da97df], retrying...."
	Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.501604500Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.610251500Z" level=info msg="Loading containers: done."
	Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.683396100Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
	Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.683557700Z" level=info msg="Daemon has completed initialization"
	Jan 14 10:25:17 functional-102159 systemd[1]: Started Docker Application Container Engine.
	Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.735825000Z" level=info msg="API listen on [::]:2376"
	Jan 14 10:25:17 functional-102159 dockerd[8509]: time="2023-01-14T10:25:17.751020900Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 14 10:25:18 functional-102159 dockerd[8509]: time="2023-01-14T10:25:18.099900100Z" level=error msg="Failed to compute size of container rootfs cf7dfe43e73cf169850b52c4c6ea070bcfe118ee3cb98b7da10067e5186c3de0: mount does not exist"
	Jan 14 10:25:18 functional-102159 dockerd[8509]: time="2023-01-14T10:25:18.207068700Z" level=error msg="Failed to compute size of container rootfs dea50957430ba664cf8891d7d9acec84ade810c476ea0072f1965b5abe699612: mount does not exist"
	Jan 14 10:25:24 functional-102159 dockerd[8509]: time="2023-01-14T10:25:24.309974100Z" level=info msg="ignoring event" container=4798303d77b584f6a204d194ddf0d4190b7761a8e123edc43076c7e65e2dffff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:25:24 functional-102159 dockerd[8509]: time="2023-01-14T10:25:24.398462800Z" level=info msg="ignoring event" container=50bfd183fcb7b51abb9f4e0678d33f4570f388e619be6cbe6e90a9ae17f4f8a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:25:24 functional-102159 dockerd[8509]: time="2023-01-14T10:25:24.398523500Z" level=info msg="ignoring event" container=0354baff29b5d47625c000016cb90034f6814d353df74a75798b139be72d5aa4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:25:24 functional-102159 dockerd[8509]: time="2023-01-14T10:25:24.505885000Z" level=info msg="ignoring event" container=efa78421167e425954ce6b9f859c0b64273c31e5a550fe71c7b01181795849f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:25:24 functional-102159 dockerd[8509]: time="2023-01-14T10:25:24.505965500Z" level=info msg="ignoring event" container=ef1da7f1f0c7d1a635bddd3a7f98811b5b47e0e0a9d797cf61c29382b88a0eb2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:25:24 functional-102159 dockerd[8509]: time="2023-01-14T10:25:24.506002400Z" level=info msg="ignoring event" container=71a71b9bf4584e969bbdac5f2ba18bd91477d21ca8f5a6897a36b005ee9a1261 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:25:24 functional-102159 dockerd[8509]: time="2023-01-14T10:25:24.506023400Z" level=info msg="ignoring event" container=01aa7f5164cd28eeb5ff68f0de296fb00e2efaf8fdf04e3b0eaf65500172220a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:25:26 functional-102159 dockerd[8509]: time="2023-01-14T10:25:26.280522000Z" level=info msg="ignoring event" container=df005816fd7c743b9a1e88f82564d0c993c681188c62101090955d5f404bd475 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:25:40 functional-102159 dockerd[8509]: time="2023-01-14T10:25:40.920535900Z" level=info msg="ignoring event" container=d01dce30be6634fda3259f769b014189a59c9dccade3aa73e1ec87afda159f30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:27:06 functional-102159 dockerd[8509]: time="2023-01-14T10:27:06.298124800Z" level=info msg="ignoring event" container=ba990b9f959a302e47b3c89a85b515c85be33c58b9262db28ed033f600d3db8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:27:07 functional-102159 dockerd[8509]: time="2023-01-14T10:27:07.199036500Z" level=info msg="ignoring event" container=47c5551587d5d8038cfe55a9eed971866370d05f84e5438dc4f9da97623cb204 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:28:16 functional-102159 dockerd[8509]: time="2023-01-14T10:28:16.307402400Z" level=info msg="ignoring event" container=afa9aaf7d7fdfb6820c25b2882344897e3ed6ae3c01c3c964780f8c32aaa36e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:28:17 functional-102159 dockerd[8509]: time="2023-01-14T10:28:17.730745600Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	0338cc1a4b98d       mysql@sha256:6306f106a056e24b3a2582a59a4c84cd199907f826eff27df36406f227cd9a7d                   33 minutes ago      Running             mysql                     0                   eda543c2273b1
	a7830dcb341e1       nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e                   34 minutes ago      Running             myfrontend                0                   900235733b143
	d5073e9565a3d       82e4c8a736a4f                                                                                   35 minutes ago      Running             echoserver                0                   0cfba3becc101
	14697928ea507       nginx@sha256:659610aadb34b7967dea7686926fdcf08d588a71c5121edb094ce0e4cdbc45e6                   35 minutes ago      Running             nginx                     0                   faea8ad195ef0
	126fb125f52e4       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   35 minutes ago      Running             echoserver                0                   2d062826068d2
	44589eb98582b       6e38f40d628db                                                                                   36 minutes ago      Running             storage-provisioner       4                   59aa08e330a75
	df5aed183a9f7       5185b96f0becf                                                                                   36 minutes ago      Running             coredns                   3                   1ff7efe5ca6cf
	6f8558a3b16a7       0346dbd74bcb9                                                                                   36 minutes ago      Running             kube-apiserver            0                   d5f80083e8cac
	e2a0f337c0b52       6039992312758                                                                                   36 minutes ago      Running             kube-controller-manager   3                   c0e77e6470c9f
	775ae3ed4b9d9       a8a176a5d5d69                                                                                   36 minutes ago      Running             etcd                      3                   46d5c84a5f5b1
	d415ff2a68b71       6d23ec0e8b87e                                                                                   36 minutes ago      Running             kube-scheduler            3                   cc8eb135e4c65
	4ca61b0fe8ea6       beaaf00edd38a                                                                                   36 minutes ago      Running             kube-proxy                3                   c79d232c187a6
	a6a24bfc13562       6e38f40d628db                                                                                   37 minutes ago      Exited              storage-provisioner       3                   214fa47545901
	3570b5740a849       5185b96f0becf                                                                                   37 minutes ago      Exited              coredns                   2                   47d4814b57604
	5623e194917fb       6039992312758                                                                                   37 minutes ago      Exited              kube-controller-manager   2                   952ac0f27f986
	de11e4aa3fdd2       a8a176a5d5d69                                                                                   37 minutes ago      Exited              etcd                      2                   70411bd9f595d
	90a933a59d269       beaaf00edd38a                                                                                   37 minutes ago      Exited              kube-proxy                2                   d0d5611bfd093
	e5277841b152d       6d23ec0e8b87e                                                                                   37 minutes ago      Exited              kube-scheduler            2                   07813b99c43a1
	
	* 
	* ==> coredns [3570b5740a84] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [df5aed183a9f] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> describe nodes <==
	* Name:               functional-102159
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-102159
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
	                    minikube.k8s.io/name=functional-102159
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_14T10_23_15_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 10:23:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-102159
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 11:01:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Jan 2023 10:59:44 +0000   Sat, 14 Jan 2023 10:23:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Jan 2023 10:59:44 +0000   Sat, 14 Jan 2023 10:23:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Jan 2023 10:59:44 +0000   Sat, 14 Jan 2023 10:23:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Jan 2023 10:59:44 +0000   Sat, 14 Jan 2023 10:23:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-102159
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                dc065f8e2d1f42529ccfe18f8b887c8c
	  Boot ID:                    abbf2dbe-7291-44a4-8406-1487b6f3b20a
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5fcdfb5cc4-m9lg9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         36m
	  default                     hello-node-connect-6458c8fb6f-5bzgt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         35m
	  default                     mysql-596b7fcdbf-r5qcr                       600m (3%)     700m (4%)   512Mi (0%)       700Mi (1%)     34m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         35m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m
	  kube-system                 coredns-565d847f94-b8m5m                     100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     38m
	  kube-system                 etcd-functional-102159                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         38m
	  kube-system                 kube-apiserver-functional-102159             250m (1%)     0 (0%)      0 (0%)           0 (0%)         36m
	  kube-system                 kube-controller-manager-functional-102159    200m (1%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-proxy-82zd2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-scheduler-functional-102159             100m (0%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                1350m (8%)  700m (4%)
	  memory             682Mi (1%)  870Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 38m                kube-proxy       
	  Normal  Starting                 36m                kube-proxy       
	  Normal  Starting                 37m                kube-proxy       
	  Normal  NodeHasSufficientMemory  39m (x7 over 39m)  kubelet          Node functional-102159 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39m (x6 over 39m)  kubelet          Node functional-102159 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39m (x6 over 39m)  kubelet          Node functional-102159 status is now: NodeHasSufficientPID
	  Normal  Starting                 38m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38m                kubelet          Node functional-102159 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38m                kubelet          Node functional-102159 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38m                kubelet          Node functional-102159 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                38m                kubelet          Node functional-102159 status is now: NodeReady
	  Normal  RegisteredNode           38m                node-controller  Node functional-102159 event: Registered Node functional-102159 in Controller
	  Normal  RegisteredNode           37m                node-controller  Node functional-102159 event: Registered Node functional-102159 in Controller
	  Normal  Starting                 36m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36m (x8 over 36m)  kubelet          Node functional-102159 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36m (x8 over 36m)  kubelet          Node functional-102159 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36m (x7 over 36m)  kubelet          Node functional-102159 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  36m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           36m                node-controller  Node functional-102159 event: Registered Node functional-102159 in Controller
	
	* 
	* ==> dmesg <==
	* [Jan14 10:37] WSL2: Performing memory compaction.
	[Jan14 10:38] WSL2: Performing memory compaction.
	[Jan14 10:39] WSL2: Performing memory compaction.
	[Jan14 10:40] WSL2: Performing memory compaction.
	[Jan14 10:41] WSL2: Performing memory compaction.
	[Jan14 10:42] WSL2: Performing memory compaction.
	[Jan14 10:43] WSL2: Performing memory compaction.
	[Jan14 10:44] WSL2: Performing memory compaction.
	[Jan14 10:45] WSL2: Performing memory compaction.
	[Jan14 10:46] WSL2: Performing memory compaction.
	[Jan14 10:47] WSL2: Performing memory compaction.
	[Jan14 10:48] WSL2: Performing memory compaction.
	[Jan14 10:49] WSL2: Performing memory compaction.
	[Jan14 10:50] WSL2: Performing memory compaction.
	[Jan14 10:51] WSL2: Performing memory compaction.
	[Jan14 10:52] WSL2: Performing memory compaction.
	[Jan14 10:53] WSL2: Performing memory compaction.
	[Jan14 10:54] WSL2: Performing memory compaction.
	[Jan14 10:55] WSL2: Performing memory compaction.
	[Jan14 10:56] WSL2: Performing memory compaction.
	[Jan14 10:57] WSL2: Performing memory compaction.
	[Jan14 10:58] WSL2: Performing memory compaction.
	[Jan14 10:59] WSL2: Performing memory compaction.
	[Jan14 11:00] WSL2: Performing memory compaction.
	[Jan14 11:01] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [775ae3ed4b9d] <==
	* {"level":"info","ts":"2023-01-14T10:28:52.237Z","caller":"traceutil/trace.go:171","msg":"trace[367315292] linearizableReadLoop","detail":"{readStateIndex:962; appliedIndex:962; }","duration":"955.6113ms","start":"2023-01-14T10:28:51.281Z","end":"2023-01-14T10:28:52.237Z","steps":["trace[367315292] 'read index received'  (duration: 955.6015ms)","trace[367315292] 'applied index is now lower than readState.Index'  (duration: 6.6µs)"],"step_count":2}
	{"level":"warn","ts":"2023-01-14T10:28:52.237Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"820.9913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:13520"}
	{"level":"warn","ts":"2023-01-14T10:28:52.237Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"956.0255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-01-14T10:28:52.238Z","caller":"traceutil/trace.go:171","msg":"trace[123146674] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:871; }","duration":"821.2737ms","start":"2023-01-14T10:28:51.416Z","end":"2023-01-14T10:28:52.238Z","steps":["trace[123146674] 'agreement among raft nodes before linearized reading'  (duration: 820.9051ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-14T10:28:52.238Z","caller":"traceutil/trace.go:171","msg":"trace[967262301] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:871; }","duration":"956.1071ms","start":"2023-01-14T10:28:51.281Z","end":"2023-01-14T10:28:52.238Z","steps":["trace[967262301] 'agreement among raft nodes before linearized reading'  (duration: 955.9817ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-14T10:28:52.238Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-14T10:28:51.416Z","time spent":"821.3683ms","remote":"127.0.0.1:46316","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":13544,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2023-01-14T10:28:52.238Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"121.4476ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
	{"level":"info","ts":"2023-01-14T10:28:52.238Z","caller":"traceutil/trace.go:171","msg":"trace[2073246068] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:871; }","duration":"121.4898ms","start":"2023-01-14T10:28:52.116Z","end":"2023-01-14T10:28:52.238Z","steps":["trace[2073246068] 'agreement among raft nodes before linearized reading'  (duration: 121.402ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-14T10:28:52.238Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"697.5543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-01-14T10:28:52.238Z","caller":"traceutil/trace.go:171","msg":"trace[419299038] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:871; }","duration":"697.6117ms","start":"2023-01-14T10:28:51.540Z","end":"2023-01-14T10:28:52.238Z","steps":["trace[419299038] 'agreement among raft nodes before linearized reading'  (duration: 697.5111ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-14T10:28:52.238Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-14T10:28:51.540Z","time spent":"697.6939ms","remote":"127.0.0.1:46328","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-01-14T10:28:52.238Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-14T10:28:51.281Z","time spent":"956.1915ms","remote":"127.0.0.1:46376","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":29,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true "}
	{"level":"info","ts":"2023-01-14T10:35:34.937Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":953}
	{"level":"info","ts":"2023-01-14T10:35:34.939Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":953,"took":"1.3255ms"}
	{"level":"info","ts":"2023-01-14T10:40:34.972Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1164}
	{"level":"info","ts":"2023-01-14T10:40:34.973Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1164,"took":"540.3µs"}
	{"level":"info","ts":"2023-01-14T10:44:12.535Z","caller":"traceutil/trace.go:171","msg":"trace[944493598] transaction","detail":"{read_only:false; response_revision:1525; number_of_response:1; }","duration":"119.7332ms","start":"2023-01-14T10:44:12.415Z","end":"2023-01-14T10:44:12.535Z","steps":["trace[944493598] 'process raft request'  (duration: 94.5731ms)","trace[944493598] 'compare'  (duration: 24.7699ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-14T10:45:34.989Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1374}
	{"level":"info","ts":"2023-01-14T10:45:34.990Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1374,"took":"1.1676ms"}
	{"level":"info","ts":"2023-01-14T10:50:35.013Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1585}
	{"level":"info","ts":"2023-01-14T10:50:35.014Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1585,"took":"1.0628ms"}
	{"level":"info","ts":"2023-01-14T10:55:35.035Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1795}
	{"level":"info","ts":"2023-01-14T10:55:35.036Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1795,"took":"693.8µs"}
	{"level":"info","ts":"2023-01-14T11:00:35.054Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2005}
	{"level":"info","ts":"2023-01-14T11:00:35.056Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2005,"took":"736.1µs"}
	
	* 
	* ==> etcd [de11e4aa3fdd] <==
	* {"level":"info","ts":"2023-01-14T10:24:12.609Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:24:12.610Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T10:24:12.610Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-14T10:24:12.617Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-01-14T10:24:12.617Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2023-01-14T10:24:20.699Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.58ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-01-14T10:24:20.700Z","caller":"traceutil/trace.go:171","msg":"trace[1620044353] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:417; }","duration":"104.7522ms","start":"2023-01-14T10:24:20.595Z","end":"2023-01-14T10:24:20.700Z","steps":["trace[1620044353] 'range keys from in-memory index tree'  (duration: 104.397ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-14T10:24:20.700Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.5001ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2023-01-14T10:24:20.700Z","caller":"traceutil/trace.go:171","msg":"trace[190619426] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:417; }","duration":"104.5681ms","start":"2023-01-14T10:24:20.595Z","end":"2023-01-14T10:24:20.700Z","steps":["trace[190619426] 'range keys from in-memory index tree'  (duration: 104.3749ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-14T10:24:20.702Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.6737ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128018417634151038 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-functional-102159.173a25d95b8cab6c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-functional-102159.173a25d95b8cab6c\" value_size:714 lease:8128018417634151025 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-01-14T10:24:20.702Z","caller":"traceutil/trace.go:171","msg":"trace[1394572518] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"104.6355ms","start":"2023-01-14T10:24:20.598Z","end":"2023-01-14T10:24:20.702Z","steps":["trace[1394572518] 'process raft request'  (duration: 104.5468ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-14T10:24:20.702Z","caller":"traceutil/trace.go:171","msg":"trace[1625638255] linearizableReadLoop","detail":"{readStateIndex:440; appliedIndex:439; }","duration":"105.9784ms","start":"2023-01-14T10:24:20.596Z","end":"2023-01-14T10:24:20.702Z","steps":["trace[1625638255] 'read index received'  (duration: 99.1254ms)","trace[1625638255] 'applied index is now lower than readState.Index'  (duration: 6.8478ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-14T10:24:20.702Z","caller":"traceutil/trace.go:171","msg":"trace[3976706] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"106.4072ms","start":"2023-01-14T10:24:20.596Z","end":"2023-01-14T10:24:20.702Z","steps":["trace[3976706] 'compare'  (duration: 102.7602ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-14T10:24:20.702Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.3222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-public\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-01-14T10:24:20.703Z","caller":"traceutil/trace.go:171","msg":"trace[2012943458] range","detail":"{range_begin:/registry/namespaces/kube-public; range_end:; response_count:1; response_revision:419; }","duration":"106.4567ms","start":"2023-01-14T10:24:20.596Z","end":"2023-01-14T10:24:20.702Z","steps":["trace[2012943458] 'agreement among raft nodes before linearized reading'  (duration: 106.2969ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-14T10:24:20.708Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.1778ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" ","response":"range_response_count:2 size:1908"}
	{"level":"info","ts":"2023-01-14T10:24:20.708Z","caller":"traceutil/trace.go:171","msg":"trace[1930976732] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:2; response_revision:419; }","duration":"111.3752ms","start":"2023-01-14T10:24:20.597Z","end":"2023-01-14T10:24:20.708Z","steps":["trace[1930976732] 'agreement among raft nodes before linearized reading'  (duration: 111.1419ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-14T10:25:04.994Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-01-14T10:25:04.995Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-102159","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2023/01/14 10:25:04 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2023/01/14 10:25:05 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2023-01-14T10:25:05.095Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2023-01-14T10:25:05.107Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-01-14T10:25:05.109Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-01-14T10:25:05.109Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-102159","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  11:02:04 up 56 min,  0 users,  load average: 0.39, 0.47, 0.56
	Linux functional-102159 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [6f8558a3b16a] <==
	* I0114 10:26:02.506569       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0114 10:26:02.802701       1 alloc.go:327] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.103.134.250]
	I0114 10:26:02.915129       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0114 10:26:10.222011       1 alloc.go:327] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.101.250.152]
	I0114 10:26:24.028970       1 trace.go:205] Trace[1417863744]: "Get" url:/api/v1/namespaces/default/persistentvolumeclaims/myclaim,user-agent:kubectl.exe/v1.18.2 (windows/amd64) kubernetes/52c56ce,audit-id:125fd9ef-a25e-4a8d-93a4-4362e1c676e4,client:192.168.49.1,accept:application/json,protocol:HTTP/2.0 (14-Jan-2023 10:26:23.439) (total time: 589ms):
	Trace[1417863744]: ---"About to write a response" 589ms (10:26:24.028)
	Trace[1417863744]: [589.2883ms] [589.2883ms] END
	I0114 10:26:42.513526       1 alloc.go:327] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.111.220.79]
	I0114 10:27:40.224740       1 alloc.go:327] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.111.101.114]
	I0114 10:28:13.696602       1 trace.go:205] Trace[189169]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.49.2,type:*v1.Endpoints (14-Jan-2023 10:28:12.198) (total time: 1498ms):
	Trace[189169]: ---"Txn call finished" err:<nil> 1493ms (10:28:13.696)
	Trace[189169]: [1.4980267s] [1.4980267s] END
	I0114 10:28:13.697507       1 trace.go:205] Trace[1760240503]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:8448ab6b-6d05-4550-beab-2c294c8e0291,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (14-Jan-2023 10:28:12.913) (total time: 783ms):
	Trace[1760240503]: ---"About to write a response" 783ms (10:28:13.697)
	Trace[1760240503]: [783.5553ms] [783.5553ms] END
	I0114 10:28:13.698861       1 trace.go:205] Trace[1912017713]: "List(recursive=true) etcd3" audit-id:e85cf58c-64c7-476c-a742-a387cf0f41e8,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Jan-2023 10:28:12.408) (total time: 1289ms):
	Trace[1912017713]: [1.2899448s] [1.2899448s] END
	I0114 10:28:13.699907       1 trace.go:205] Trace[437431457]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:e85cf58c-64c7-476c-a742-a387cf0f41e8,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (14-Jan-2023 10:28:12.408) (total time: 1291ms):
	Trace[437431457]: ---"Listing from storage done" 1290ms (10:28:13.698)
	Trace[437431457]: [1.2910478s] [1.2910478s] END
	I0114 10:28:52.239713       1 trace.go:205] Trace[1047385238]: "List(recursive=true) etcd3" audit-id:06199082-3399-4c8e-a4aa-8002fc869ffa,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (14-Jan-2023 10:28:51.415) (total time: 824ms):
	Trace[1047385238]: [824.2437ms] [824.2437ms] END
	I0114 10:28:52.240419       1 trace.go:205] Trace[1071649882]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:06199082-3399-4c8e-a4aa-8002fc869ffa,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (14-Jan-2023 10:28:51.415) (total time: 825ms):
	Trace[1071649882]: ---"Listing from storage done" 824ms (10:28:52.239)
	Trace[1071649882]: [825.0645ms] [825.0645ms] END
	
	* 
	* ==> kube-controller-manager [5623e194917f] <==
	* I0114 10:24:34.600319       1 shared_informer.go:262] Caches are synced for disruption
	I0114 10:24:34.601811       1 shared_informer.go:262] Caches are synced for PVC protection
	I0114 10:24:34.602550       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0114 10:24:34.604060       1 shared_informer.go:262] Caches are synced for expand
	I0114 10:24:34.604162       1 shared_informer.go:262] Caches are synced for service account
	I0114 10:24:34.694518       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0114 10:24:34.694670       1 shared_informer.go:262] Caches are synced for stateful set
	I0114 10:24:34.694722       1 shared_informer.go:262] Caches are synced for job
	I0114 10:24:34.694750       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0114 10:24:34.695401       1 shared_informer.go:262] Caches are synced for GC
	I0114 10:24:34.702045       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0114 10:24:34.708159       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0114 10:24:34.795118       1 shared_informer.go:262] Caches are synced for taint
	I0114 10:24:34.795138       1 shared_informer.go:262] Caches are synced for resource quota
	I0114 10:24:34.795206       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I0114 10:24:34.795242       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I0114 10:24:34.795320       1 taint_manager.go:209] "Sending events to api server"
	I0114 10:24:34.795515       1 event.go:294] "Event occurred" object="functional-102159" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-102159 event: Registered Node functional-102159 in Controller"
	W0114 10:24:34.795329       1 node_lifecycle_controller.go:1058] Missing timestamp for Node functional-102159. Assuming now as a timestamp.
	I0114 10:24:34.795627       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I0114 10:24:34.795209       1 shared_informer.go:262] Caches are synced for daemon sets
	I0114 10:24:34.795434       1 shared_informer.go:262] Caches are synced for resource quota
	I0114 10:24:35.025643       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:24:35.025750       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0114 10:24:35.109009       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [e2a0f337c0b5] <==
	* I0114 10:25:54.895552       1 shared_informer.go:262] Caches are synced for namespace
	I0114 10:25:54.898218       1 shared_informer.go:262] Caches are synced for taint
	I0114 10:25:54.898469       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I0114 10:25:54.898532       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	W0114 10:25:54.898568       1 node_lifecycle_controller.go:1058] Missing timestamp for Node functional-102159. Assuming now as a timestamp.
	I0114 10:25:54.898597       1 taint_manager.go:209] "Sending events to api server"
	I0114 10:25:54.898630       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I0114 10:25:54.898969       1 event.go:294] "Event occurred" object="functional-102159" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-102159 event: Registered Node functional-102159 in Controller"
	I0114 10:25:54.899657       1 shared_informer.go:262] Caches are synced for expand
	I0114 10:25:54.904720       1 shared_informer.go:262] Caches are synced for resource quota
	I0114 10:25:54.904856       1 shared_informer.go:262] Caches are synced for resource quota
	I0114 10:25:54.917344       1 shared_informer.go:262] Caches are synced for PV protection
	I0114 10:25:54.921720       1 shared_informer.go:262] Caches are synced for persistent volume
	I0114 10:25:54.997577       1 shared_informer.go:262] Caches are synced for attach detach
	I0114 10:25:55.313016       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:25:55.313083       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:25:55.313273       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0114 10:26:02.511251       1 event.go:294] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-5fcdfb5cc4 to 1"
	I0114 10:26:02.597778       1 event.go:294] "Event occurred" object="default/hello-node-5fcdfb5cc4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-5fcdfb5cc4-m9lg9"
	I0114 10:26:21.398138       1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0114 10:26:21.398303       1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0114 10:26:41.996850       1 event.go:294] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-6458c8fb6f to 1"
	I0114 10:26:42.021680       1 event.go:294] "Event occurred" object="default/hello-node-connect-6458c8fb6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-6458c8fb6f-5bzgt"
	I0114 10:27:40.324409       1 event.go:294] "Event occurred" object="default/mysql" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-596b7fcdbf to 1"
	I0114 10:27:40.429763       1 event.go:294] "Event occurred" object="default/mysql-596b7fcdbf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-596b7fcdbf-r5qcr"
	
	* 
	* ==> kube-proxy [4ca61b0fe8ea] <==
	* I0114 10:25:25.708632       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0114 10:25:25.711638       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E0114 10:25:27.296379       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-102159": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:38432->192.168.49.2:8441: read: connection reset by peer
	E0114 10:25:28.469334       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-102159": dial tcp 192.168.49.2:8441: connect: connection refused
	E0114 10:25:30.612361       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-102159": dial tcp 192.168.49.2:8441: connect: connection refused
	I0114 10:25:39.710342       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0114 10:25:39.710422       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0114 10:25:39.710496       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 10:25:39.931333       1 server_others.go:206] "Using iptables Proxier"
	I0114 10:25:39.931484       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0114 10:25:39.931498       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0114 10:25:39.931516       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0114 10:25:39.931550       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:25:39.931955       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:25:39.932895       1 server.go:661] "Version info" version="v1.25.3"
	I0114 10:25:39.933002       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:25:39.934119       1 config.go:226] "Starting endpoint slice config controller"
	I0114 10:25:39.934424       1 config.go:317] "Starting service config controller"
	I0114 10:25:39.934429       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 10:25:39.934193       1 config.go:444] "Starting node config controller"
	I0114 10:25:39.934494       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 10:25:39.934478       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 10:25:40.095377       1 shared_informer.go:262] Caches are synced for service config
	I0114 10:25:40.095550       1 shared_informer.go:262] Caches are synced for node config
	I0114 10:25:40.095580       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [90a933a59d26] <==
	* I0114 10:24:11.199188       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0114 10:24:11.203035       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0114 10:24:11.206786       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0114 10:24:11.295659       1 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E0114 10:24:11.301318       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-102159": dial tcp 192.168.49.2:8441: connect: connection refused
	I0114 10:24:20.498097       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0114 10:24:20.498257       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0114 10:24:20.498298       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 10:24:20.802191       1 server_others.go:206] "Using iptables Proxier"
	I0114 10:24:20.802559       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0114 10:24:20.802582       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0114 10:24:20.802660       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0114 10:24:20.802753       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:24:20.803274       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:24:20.803938       1 server.go:661] "Version info" version="v1.25.3"
	I0114 10:24:20.804055       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:24:20.804968       1 config.go:226] "Starting endpoint slice config controller"
	I0114 10:24:20.805388       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 10:24:20.805037       1 config.go:444] "Starting node config controller"
	I0114 10:24:20.805417       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 10:24:20.805534       1 config.go:317] "Starting service config controller"
	I0114 10:24:20.805650       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 10:24:20.909848       1 shared_informer.go:262] Caches are synced for node config
	I0114 10:24:20.910069       1 shared_informer.go:262] Caches are synced for service config
	I0114 10:24:20.910192       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [d415ff2a68b7] <==
	* I0114 10:25:35.707598       1 serving.go:348] Generated self-signed cert in-memory
	W0114 10:25:39.599464       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0114 10:25:39.599615       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0114 10:25:39.599645       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0114 10:25:39.599664       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0114 10:25:39.708709       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I0114 10:25:39.708912       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:25:39.711663       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0114 10:25:39.711936       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 10:25:39.711705       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0114 10:25:39.711758       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0114 10:25:39.814090       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [e5277841b152] <==
	* I0114 10:24:13.208214       1 serving.go:348] Generated self-signed cert in-memory
	W0114 10:24:20.398862       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0114 10:24:20.398929       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0114 10:24:20.398949       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0114 10:24:20.398965       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0114 10:24:20.502211       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I0114 10:24:20.502385       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:24:20.505158       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0114 10:24:20.505270       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0114 10:24:20.505304       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 10:24:20.599818       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0114 10:24:20.706374       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 10:25:04.904652       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0114 10:25:04.905093       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0114 10:25:04.905114       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0114 10:25:04.905626       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-14 10:22:38 UTC, end at Sat 2023-01-14 11:02:05 UTC. --
	Jan 14 10:26:42 functional-102159 kubelet[10330]: I0114 10:26:42.922888   10330 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0cfba3becc101bb395aed7f49535287442acfd305b87404974d2592b95df3a9f"
	Jan 14 10:27:08 functional-102159 kubelet[10330]: I0114 10:27:08.004742   10330 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/b8420616-b5fd-42c0-b7a9-c83030278ce5-pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05\") pod \"b8420616-b5fd-42c0-b7a9-c83030278ce5\" (UID: \"b8420616-b5fd-42c0-b7a9-c83030278ce5\") "
	Jan 14 10:27:08 functional-102159 kubelet[10330]: I0114 10:27:08.004934   10330 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8420616-b5fd-42c0-b7a9-c83030278ce5-pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05" (OuterVolumeSpecName: "mypd") pod "b8420616-b5fd-42c0-b7a9-c83030278ce5" (UID: "b8420616-b5fd-42c0-b7a9-c83030278ce5"). InnerVolumeSpecName "pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 14 10:27:08 functional-102159 kubelet[10330]: I0114 10:27:08.005173   10330 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwxxl\" (UniqueName: \"kubernetes.io/projected/b8420616-b5fd-42c0-b7a9-c83030278ce5-kube-api-access-pwxxl\") pod \"b8420616-b5fd-42c0-b7a9-c83030278ce5\" (UID: \"b8420616-b5fd-42c0-b7a9-c83030278ce5\") "
	Jan 14 10:27:08 functional-102159 kubelet[10330]: I0114 10:27:08.005289   10330 reconciler.go:399] "Volume detached for volume \"pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05\" (UniqueName: \"kubernetes.io/host-path/b8420616-b5fd-42c0-b7a9-c83030278ce5-pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05\") on node \"functional-102159\" DevicePath \"\""
	Jan 14 10:27:08 functional-102159 kubelet[10330]: I0114 10:27:08.008486   10330 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8420616-b5fd-42c0-b7a9-c83030278ce5-kube-api-access-pwxxl" (OuterVolumeSpecName: "kube-api-access-pwxxl") pod "b8420616-b5fd-42c0-b7a9-c83030278ce5" (UID: "b8420616-b5fd-42c0-b7a9-c83030278ce5"). InnerVolumeSpecName "kube-api-access-pwxxl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 14 10:27:08 functional-102159 kubelet[10330]: I0114 10:27:08.106818   10330 reconciler.go:399] "Volume detached for volume \"kube-api-access-pwxxl\" (UniqueName: \"kubernetes.io/projected/b8420616-b5fd-42c0-b7a9-c83030278ce5-kube-api-access-pwxxl\") on node \"functional-102159\" DevicePath \"\""
	Jan 14 10:27:08 functional-102159 kubelet[10330]: I0114 10:27:08.920470   10330 scope.go:115] "RemoveContainer" containerID="ba990b9f959a302e47b3c89a85b515c85be33c58b9262db28ed033f600d3db8e"
	Jan 14 10:27:09 functional-102159 kubelet[10330]: I0114 10:27:09.433759   10330 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 10:27:09 functional-102159 kubelet[10330]: E0114 10:27:09.433969   10330 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="b8420616-b5fd-42c0-b7a9-c83030278ce5" containerName="myfrontend"
	Jan 14 10:27:09 functional-102159 kubelet[10330]: I0114 10:27:09.434114   10330 memory_manager.go:345] "RemoveStaleState removing state" podUID="b8420616-b5fd-42c0-b7a9-c83030278ce5" containerName="myfrontend"
	Jan 14 10:27:09 functional-102159 kubelet[10330]: I0114 10:27:09.519267   10330 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05\" (UniqueName: \"kubernetes.io/host-path/836f5e25-6f77-4207-bf2f-01a2a8b4de80-pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05\") pod \"sp-pod\" (UID: \"836f5e25-6f77-4207-bf2f-01a2a8b4de80\") " pod="default/sp-pod"
	Jan 14 10:27:09 functional-102159 kubelet[10330]: I0114 10:27:09.519443   10330 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxc9m\" (UniqueName: \"kubernetes.io/projected/836f5e25-6f77-4207-bf2f-01a2a8b4de80-kube-api-access-gxc9m\") pod \"sp-pod\" (UID: \"836f5e25-6f77-4207-bf2f-01a2a8b4de80\") " pod="default/sp-pod"
	Jan 14 10:27:10 functional-102159 kubelet[10330]: I0114 10:27:10.720194   10330 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b8420616-b5fd-42c0-b7a9-c83030278ce5 path="/var/lib/kubelet/pods/b8420616-b5fd-42c0-b7a9-c83030278ce5/volumes"
	Jan 14 10:27:11 functional-102159 kubelet[10330]: I0114 10:27:11.029377   10330 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="900235733b1433ea55b63bae15b58951cdd983dd8b02e6ddf6274319cd0c8a46"
	Jan 14 10:27:40 functional-102159 kubelet[10330]: I0114 10:27:40.516852   10330 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 10:27:40 functional-102159 kubelet[10330]: I0114 10:27:40.716464   10330 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hsb9\" (UniqueName: \"kubernetes.io/projected/49328d8a-bf22-466e-b722-c9d1061506d0-kube-api-access-8hsb9\") pod \"mysql-596b7fcdbf-r5qcr\" (UID: \"49328d8a-bf22-466e-b722-c9d1061506d0\") " pod="default/mysql-596b7fcdbf-r5qcr"
	Jan 14 10:27:43 functional-102159 kubelet[10330]: I0114 10:27:43.701968   10330 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="eda543c2273b19508cc70d4b476b5d7188032c446c65d1b4ab96497d71241676"
	Jan 14 10:30:31 functional-102159 kubelet[10330]: W0114 10:30:31.039513   10330 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jan 14 10:35:31 functional-102159 kubelet[10330]: W0114 10:35:31.040450   10330 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jan 14 10:40:31 functional-102159 kubelet[10330]: W0114 10:40:31.044406   10330 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jan 14 10:45:31 functional-102159 kubelet[10330]: W0114 10:45:31.047836   10330 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jan 14 10:50:31 functional-102159 kubelet[10330]: W0114 10:50:31.050558   10330 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jan 14 10:55:31 functional-102159 kubelet[10330]: W0114 10:55:31.049940   10330 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jan 14 11:00:31 functional-102159 kubelet[10330]: W0114 11:00:31.115669   10330 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	
	* 
	* ==> storage-provisioner [44589eb98582] <==
	* I0114 10:25:42.696348       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0114 10:25:42.805823       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0114 10:25:42.805995       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0114 10:26:00.228433       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0114 10:26:00.228747       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cad4d3da-c442-4946-b691-f57647b16439", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-102159_a2cd9e9d-1e7b-4af5-bb82-1f8161b05f29 became leader
	I0114 10:26:00.228855       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-102159_a2cd9e9d-1e7b-4af5-bb82-1f8161b05f29!
	I0114 10:26:00.330745       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-102159_a2cd9e9d-1e7b-4af5-bb82-1f8161b05f29!
	I0114 10:26:21.398353       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0114 10:26:21.399060       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0114 10:26:21.398622       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    c38f9802-a8bf-4487-a640-1f377c5ca0db 372 0 2023-01-14 10:23:34 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-01-14 10:23:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05 676 0 2023-01-14 10:26:21 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-01-14 10:26:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2023-01-14 10:26:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0114 10:26:21.401499       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05" provisioned
	I0114 10:26:21.401630       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0114 10:26:21.401644       1 volume_store.go:212] Trying to save persistentvolume "pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05"
	I0114 10:26:21.599192       1 volume_store.go:219] persistentvolume "pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05" saved
	I0114 10:26:21.599518       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-a9d24d31-3c09-4e35-9e72-fa4e8ad6bb05
	
	* 
	* ==> storage-provisioner [a6a24bfc1356] <==
	* I0114 10:24:41.795629       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0114 10:24:41.817898       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0114 10:24:41.818079       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0114 10:24:59.237792       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0114 10:24:59.238905       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cad4d3da-c442-4946-b691-f57647b16439", APIVersion:"v1", ResourceVersion:"545", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-102159_cf5e345c-831e-4677-8d72-387cb9eeb7b8 became leader
	I0114 10:24:59.239489       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-102159_cf5e345c-831e-4677-8d72-387cb9eeb7b8!
	I0114 10:24:59.339991       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-102159_cf5e345c-831e-4677-8d72-387cb9eeb7b8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-102159 -n functional-102159
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-102159 -n functional-102159: (1.624775s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-102159 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-102159 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-102159 describe pod : exit status 1 (211.6647ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context functional-102159 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (2165.43s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (574.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-114511 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker
E0114 12:01:48.461948    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 12:01:56.944107    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:01:56.959171    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:01:56.974757    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:01:57.006238    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:01:57.053625    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:01:57.148312    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:01:57.321230    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:01:57.649351    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:01:58.298256    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:01:59.587653    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:02:02.161389    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:02:05.244858    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 12:02:07.285512    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:02:17.533295    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:02:38.019191    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:02:57.688250    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
E0114 12:02:57.703041    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
E0114 12:02:57.718753    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
E0114 12:02:57.750676    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
E0114 12:02:57.797832    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
E0114 12:02:57.892660    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
E0114 12:02:58.053528    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
E0114 12:02:58.382599    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
E0114 12:02:59.023142    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
E0114 12:03:00.308623    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-114511 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 80 (9m33.2807873s)

                                                
                                                
-- stdout --
	* [cilium-114511] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node cilium-114511 in cluster cilium-114511
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Cilium (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 12:01:28.529153    7488 out.go:296] Setting OutFile to fd 1568 ...
	I0114 12:01:28.599877    7488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 12:01:28.599877    7488 out.go:309] Setting ErrFile to fd 1760...
	I0114 12:01:28.599877    7488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 12:01:28.619437    7488 out.go:303] Setting JSON to false
	I0114 12:01:28.622064    7488 start.go:125] hostinfo: {"hostname":"minikube2","uptime":9299,"bootTime":1673688389,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2486 Build 19045.2486","kernelVersion":"10.0.19045.2486 Build 19045.2486","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0114 12:01:28.622296    7488 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 12:01:28.629458    7488 out.go:177] * [cilium-114511] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
	I0114 12:01:28.634820    7488 notify.go:220] Checking for updates...
	I0114 12:01:28.638529    7488 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0114 12:01:28.643540    7488 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0114 12:01:28.649418    7488 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 12:01:28.654660    7488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 12:01:28.658210    7488 config.go:180] Loaded profile config "auto-114507": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 12:01:28.659393    7488 config.go:180] Loaded profile config "embed-certs-115542": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 12:01:28.659746    7488 config.go:180] Loaded profile config "kindnet-114509": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 12:01:28.660139    7488 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 12:01:28.982451    7488 docker.go:138] docker version: linux-20.10.21
	I0114 12:01:28.990058    7488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 12:01:29.649520    7488 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:66 SystemTime:2023-01-14 12:01:29.1519838 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 12:01:29.927935    7488 out.go:177] * Using the docker driver based on user configuration
	I0114 12:01:30.122116    7488 start.go:294] selected driver: docker
	I0114 12:01:30.122297    7488 start.go:838] validating driver "docker" against <nil>
	I0114 12:01:30.122391    7488 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 12:01:30.190343    7488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 12:01:30.808520    7488 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:66 SystemTime:2023-01-14 12:01:30.3380223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 12:01:30.809144    7488 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 12:01:30.810331    7488 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 12:01:30.844757    7488 out.go:177] * Using Docker Desktop driver with root privileges
	I0114 12:01:30.931433    7488 cni.go:95] Creating CNI manager for "cilium"
	I0114 12:01:30.931433    7488 start_flags.go:314] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0114 12:01:30.931433    7488 start_flags.go:319] config:
	{Name:cilium-114511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-114511 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 12:01:31.034595    7488 out.go:177] * Starting control plane node cilium-114511 in cluster cilium-114511
	I0114 12:01:31.124116    7488 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 12:01:31.160605    7488 out.go:177] * Pulling base image ...
	I0114 12:01:31.165909    7488 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 12:01:31.165909    7488 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 12:01:31.165909    7488 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 12:01:31.165909    7488 cache.go:57] Caching tarball of preloaded images
	I0114 12:01:31.166668    7488 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 12:01:31.166918    7488 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 12:01:31.167086    7488 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\config.json ...
	I0114 12:01:31.167086    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\config.json: {Name:mk87587c714bc3b2aa1fe3b584f029fb1502fd43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:01:31.390625    7488 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 12:01:31.390625    7488 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 12:01:31.390625    7488 cache.go:193] Successfully downloaded all kic artifacts
	I0114 12:01:31.390625    7488 start.go:364] acquiring machines lock for cilium-114511: {Name:mkbcfb3555d1094104122a087dbd8320c941634b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 12:01:31.390625    7488 start.go:368] acquired machines lock for "cilium-114511" in 0s
	I0114 12:01:31.390625    7488 start.go:93] Provisioning new machine with config: &{Name:cilium-114511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-114511 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 12:01:31.390625    7488 start.go:125] createHost starting for "" (driver="docker")
	I0114 12:01:31.396595    7488 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0114 12:01:31.396595    7488 start.go:159] libmachine.API.Create for "cilium-114511" (driver="docker")
	I0114 12:01:31.396595    7488 client.go:168] LocalClient.Create starting
	I0114 12:01:31.396595    7488 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0114 12:01:31.397583    7488 main.go:134] libmachine: Decoding PEM data...
	I0114 12:01:31.397583    7488 main.go:134] libmachine: Parsing certificate...
	I0114 12:01:31.397583    7488 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0114 12:01:31.397583    7488 main.go:134] libmachine: Decoding PEM data...
	I0114 12:01:31.397583    7488 main.go:134] libmachine: Parsing certificate...
	I0114 12:01:31.405586    7488 cli_runner.go:164] Run: docker network inspect cilium-114511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0114 12:01:31.595949    7488 cli_runner.go:211] docker network inspect cilium-114511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0114 12:01:31.605607    7488 network_create.go:280] running [docker network inspect cilium-114511] to gather additional debugging logs...
	I0114 12:01:31.605607    7488 cli_runner.go:164] Run: docker network inspect cilium-114511
	W0114 12:01:31.805133    7488 cli_runner.go:211] docker network inspect cilium-114511 returned with exit code 1
	I0114 12:01:31.805133    7488 network_create.go:283] error running [docker network inspect cilium-114511]: docker network inspect cilium-114511: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-114511
	I0114 12:01:31.805133    7488 network_create.go:285] output of [docker network inspect cilium-114511]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-114511
	
	** /stderr **
	I0114 12:01:31.811721    7488 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 12:01:32.032155    7488 network.go:277] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000ab2a80] misses:0}
	I0114 12:01:32.032155    7488 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 12:01:32.032155    7488 network_create.go:123] attempt to create docker network cilium-114511 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0114 12:01:32.041155    7488 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-114511 cilium-114511
	W0114 12:01:32.229874    7488 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-114511 cilium-114511 returned with exit code 1
	W0114 12:01:32.229968    7488 network_create.go:115] failed to create docker network cilium-114511 192.168.49.0/24, will retry: subnet is taken
	I0114 12:01:32.249936    7488 network.go:268] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ab2a80] amended:false}} dirty:map[] misses:0}
	I0114 12:01:32.249936    7488 network.go:213] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 12:01:32.269980    7488 network.go:277] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ab2a80] amended:true}} dirty:map[192.168.49.0:0xc000ab2a80 192.168.58.0:0xc0005d0110] misses:0}
	I0114 12:01:32.269980    7488 network.go:210] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 12:01:32.269980    7488 network_create.go:123] attempt to create docker network cilium-114511 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0114 12:01:32.277984    7488 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-114511 cilium-114511
	W0114 12:01:32.507458    7488 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-114511 cilium-114511 returned with exit code 1
	W0114 12:01:32.507707    7488 network_create.go:115] failed to create docker network cilium-114511 192.168.58.0/24, will retry: subnet is taken
	I0114 12:01:32.529174    7488 network.go:268] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ab2a80] amended:true}} dirty:map[192.168.49.0:0xc000ab2a80 192.168.58.0:0xc0005d0110] misses:1}
	I0114 12:01:32.529174    7488 network.go:213] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 12:01:32.548980    7488 network.go:277] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ab2a80] amended:true}} dirty:map[192.168.49.0:0xc000ab2a80 192.168.58.0:0xc0005d0110 192.168.67.0:0xc000ab2b18] misses:1}
	I0114 12:01:32.548980    7488 network.go:210] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 12:01:32.548980    7488 network_create.go:123] attempt to create docker network cilium-114511 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0114 12:01:32.556840    7488 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-114511 cilium-114511
	W0114 12:01:32.785256    7488 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-114511 cilium-114511 returned with exit code 1
	W0114 12:01:32.785323    7488 network_create.go:115] failed to create docker network cilium-114511 192.168.67.0/24, will retry: subnet is taken
	I0114 12:01:32.805195    7488 network.go:268] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ab2a80] amended:true}} dirty:map[192.168.49.0:0xc000ab2a80 192.168.58.0:0xc0005d0110 192.168.67.0:0xc000ab2b18] misses:2}
	I0114 12:01:32.805195    7488 network.go:213] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 12:01:32.826868    7488 network.go:277] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ab2a80] amended:true}} dirty:map[192.168.49.0:0xc000ab2a80 192.168.58.0:0xc0005d0110 192.168.67.0:0xc000ab2b18 192.168.76.0:0xc0005d01e8] misses:2}
	I0114 12:01:32.827496    7488 network.go:210] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 12:01:32.827496    7488 network_create.go:123] attempt to create docker network cilium-114511 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0114 12:01:32.835994    7488 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-114511 cilium-114511
	W0114 12:01:33.049611    7488 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-114511 cilium-114511 returned with exit code 1
	W0114 12:01:33.049611    7488 network_create.go:115] failed to create docker network cilium-114511 192.168.76.0/24, will retry: subnet is taken
	I0114 12:01:33.070296    7488 network.go:268] skipping subnet 192.168.76.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ab2a80] amended:true}} dirty:map[192.168.49.0:0xc000ab2a80 192.168.58.0:0xc0005d0110 192.168.67.0:0xc000ab2b18 192.168.76.0:0xc0005d01e8] misses:3}
	I0114 12:01:33.070296    7488 network.go:213] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 12:01:33.092293    7488 network.go:277] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ab2a80] amended:true}} dirty:map[192.168.49.0:0xc000ab2a80 192.168.58.0:0xc0005d0110 192.168.67.0:0xc000ab2b18 192.168.76.0:0xc0005d01e8 192.168.85.0:0xc0001acbe8] misses:3}
	I0114 12:01:33.093073    7488 network.go:210] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 12:01:33.093073    7488 network_create.go:123] attempt to create docker network cilium-114511 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0114 12:01:33.101698    7488 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-114511 cilium-114511
	W0114 12:01:33.314072    7488 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-114511 cilium-114511 returned with exit code 1
	W0114 12:01:33.314072    7488 network_create.go:115] failed to create docker network cilium-114511 192.168.85.0/24, will retry: subnet is taken
	W0114 12:01:33.316210    7488 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create docker network cilium-114511: subnet is taken
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create docker network cilium-114511: subnet is taken
	I0114 12:01:33.329064    7488 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0114 12:01:33.564819    7488 cli_runner.go:164] Run: docker volume create cilium-114511 --label name.minikube.sigs.k8s.io=cilium-114511 --label created_by.minikube.sigs.k8s.io=true
	I0114 12:01:33.773262    7488 oci.go:103] Successfully created a docker volume cilium-114511
	I0114 12:01:33.780626    7488 cli_runner.go:164] Run: docker run --rm --name cilium-114511-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-114511 --entrypoint /usr/bin/test -v cilium-114511:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0114 12:01:37.588733    7488 cli_runner.go:217] Completed: docker run --rm --name cilium-114511-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-114511 --entrypoint /usr/bin/test -v cilium-114511:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib: (3.8080675s)
	I0114 12:01:37.588733    7488 oci.go:107] Successfully prepared a docker volume cilium-114511
	I0114 12:01:37.588733    7488 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 12:01:37.588733    7488 kic.go:190] Starting extracting preloaded images to volume ...
	I0114 12:01:37.595723    7488 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-114511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
	I0114 12:02:04.145304    7488 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-114511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (26.5489964s)
	I0114 12:02:04.145356    7488 kic.go:199] duration metric: took 26.556347 seconds to extract preloaded images to volume
	I0114 12:02:04.158731    7488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 12:02:04.778607    7488 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:58 SystemTime:2023-01-14 12:02:04.311126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 12:02:04.789217    7488 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0114 12:02:05.489782    7488 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-114511 --name cilium-114511 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-114511 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-114511 --volume cilium-114511:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0114 12:02:06.902443    7488 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-114511 --name cilium-114511 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-114511 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-114511 --volume cilium-114511:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c: (1.4125816s)
	I0114 12:02:06.911436    7488 cli_runner.go:164] Run: docker container inspect cilium-114511 --format={{.State.Running}}
	I0114 12:02:07.167048    7488 cli_runner.go:164] Run: docker container inspect cilium-114511 --format={{.State.Status}}
	I0114 12:02:07.421517    7488 cli_runner.go:164] Run: docker exec cilium-114511 stat /var/lib/dpkg/alternatives/iptables
	I0114 12:02:07.870326    7488 oci.go:144] the created container "cilium-114511" has a running status.
	I0114 12:02:07.870326    7488 kic.go:221] Creating ssh key for kic: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-114511\id_rsa...
	I0114 12:02:08.093422    7488 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-114511\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0114 12:02:08.464198    7488 cli_runner.go:164] Run: docker container inspect cilium-114511 --format={{.State.Status}}
	I0114 12:02:08.717469    7488 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0114 12:02:08.717469    7488 kic_runner.go:114] Args: [docker exec --privileged cilium-114511 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0114 12:02:09.079715    7488 kic.go:261] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-114511\id_rsa...
	I0114 12:02:09.697527    7488 cli_runner.go:164] Run: docker container inspect cilium-114511 --format={{.State.Status}}
	I0114 12:02:09.917777    7488 machine.go:88] provisioning docker machine ...
	I0114 12:02:09.917777    7488 ubuntu.go:169] provisioning hostname "cilium-114511"
	I0114 12:02:09.924764    7488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-114511
	I0114 12:02:10.143768    7488 main.go:134] libmachine: Using SSH client type: native
	I0114 12:02:10.151757    7488 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x68fea0] 0x692e20 <nil>  [] 0s} 127.0.0.1 50116 <nil> <nil>}
	I0114 12:02:10.151757    7488 main.go:134] libmachine: About to run SSH command:
	sudo hostname cilium-114511 && echo "cilium-114511" | sudo tee /etc/hostname
	I0114 12:02:10.299771    7488 main.go:134] libmachine: SSH cmd err, output: <nil>: cilium-114511
	
	I0114 12:02:10.306792    7488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-114511
	I0114 12:02:10.509201    7488 main.go:134] libmachine: Using SSH client type: native
	I0114 12:02:10.510204    7488 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x68fea0] 0x692e20 <nil>  [] 0s} 127.0.0.1 50116 <nil> <nil>}
	I0114 12:02:10.510204    7488 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-114511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-114511/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-114511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 12:02:10.694211    7488 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 12:02:10.694211    7488 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I0114 12:02:10.694211    7488 ubuntu.go:177] setting up certificates
	I0114 12:02:10.694211    7488 provision.go:83] configureAuth start
	I0114 12:02:10.705207    7488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-114511
	I0114 12:02:10.932222    7488 provision.go:138] copyHostCerts
	I0114 12:02:10.932889    7488 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem, removing ...
	I0114 12:02:10.933065    7488 exec_runner.go:207] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.pem
	I0114 12:02:10.933629    7488 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0114 12:02:10.935136    7488 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem, removing ...
	I0114 12:02:10.935136    7488 exec_runner.go:207] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cert.pem
	I0114 12:02:10.935483    7488 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0114 12:02:10.936862    7488 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem, removing ...
	I0114 12:02:10.936862    7488 exec_runner.go:207] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\key.pem
	I0114 12:02:10.937468    7488 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1675 bytes)
	I0114 12:02:10.938807    7488 provision.go:112] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cilium-114511 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-114511]
	I0114 12:02:11.083710    7488 provision.go:172] copyRemoteCerts
	I0114 12:02:11.094861    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 12:02:11.101619    7488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-114511
	I0114 12:02:11.329368    7488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50116 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-114511\id_rsa Username:docker}
	I0114 12:02:11.412516    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0114 12:02:11.466522    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 12:02:11.513527    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 12:02:11.581129    7488 provision.go:86] duration metric: configureAuth took 886.9087ms
	I0114 12:02:11.581248    7488 ubuntu.go:193] setting minikube options for container-runtime
	I0114 12:02:11.581386    7488 config.go:180] Loaded profile config "cilium-114511": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 12:02:11.592010    7488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-114511
	I0114 12:02:11.807173    7488 main.go:134] libmachine: Using SSH client type: native
	I0114 12:02:11.808170    7488 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x68fea0] 0x692e20 <nil>  [] 0s} 127.0.0.1 50116 <nil> <nil>}
	I0114 12:02:11.808170    7488 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 12:02:11.986828    7488 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0114 12:02:11.986828    7488 ubuntu.go:71] root file system type: overlay
	I0114 12:02:11.987820    7488 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 12:02:11.996821    7488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-114511
	I0114 12:02:12.218913    7488 main.go:134] libmachine: Using SSH client type: native
	I0114 12:02:12.218913    7488 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x68fea0] 0x692e20 <nil>  [] 0s} 127.0.0.1 50116 <nil> <nil>}
	I0114 12:02:12.218913    7488 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 12:02:12.446858    7488 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
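The comment block in the unit above describes the standard systemd override mechanism: an empty ExecStart= line first clears whatever start command was inherited, and the following ExecStart= supplies the replacement, so only one command remains. A minimal sketch of that pattern outside this test run (file path and dockerd flags are illustrative only):
	# hypothetical drop-in, not part of this run: /etc/systemd/system/docker.service.d/override.conf
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	# afterwards, reload units and restart the service:
	#   sudo systemctl daemon-reload && sudo systemctl restart docker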
	
	I0114 12:02:12.454857    7488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-114511
	I0114 12:02:12.676306    7488 main.go:134] libmachine: Using SSH client type: native
	I0114 12:02:12.676306    7488 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x68fea0] 0x692e20 <nil>  [] 0s} 127.0.0.1 50116 <nil> <nil>}
	I0114 12:02:12.676306    7488 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 12:02:14.155886    7488 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-25 18:00:04.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-14 12:02:12.423937000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
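The diff above is produced by the install-if-changed one-liner at 12:02:12: the rendered unit only replaces /lib/systemd/system/docker.service (followed by daemon-reload, enable and restart) when the two files differ. Written out as a plain script, the same pattern from the log looks roughly like this:
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	fi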
	
	I0114 12:02:14.155886    7488 machine.go:91] provisioned docker machine in 4.2380649s
	I0114 12:02:14.155886    7488 client.go:171] LocalClient.Create took 42.7588465s
	I0114 12:02:14.155886    7488 start.go:167] duration metric: libmachine.API.Create for "cilium-114511" took 42.7588465s
	I0114 12:02:14.155886    7488 start.go:300] post-start starting for "cilium-114511" (driver="docker")
	I0114 12:02:14.155886    7488 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 12:02:14.170804    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 12:02:14.177805    7488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-114511
	I0114 12:02:14.367318    7488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50116 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-114511\id_rsa Username:docker}
	I0114 12:02:14.530547    7488 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 12:02:14.540560    7488 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 12:02:14.540560    7488 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 12:02:14.540560    7488 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 12:02:14.540560    7488 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 12:02:14.540560    7488 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I0114 12:02:14.540560    7488 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I0114 12:02:14.541548    7488 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99682.pem -> 99682.pem in /etc/ssl/certs
	I0114 12:02:14.556555    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 12:02:14.580551    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99682.pem --> /etc/ssl/certs/99682.pem (1708 bytes)
	I0114 12:02:14.635910    7488 start.go:303] post-start completed in 480.0183ms
	I0114 12:02:14.648846    7488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-114511
	I0114 12:02:14.865627    7488 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\config.json ...
	I0114 12:02:14.888774    7488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 12:02:14.903975    7488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-114511
	I0114 12:02:15.100472    7488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50116 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-114511\id_rsa Username:docker}
	I0114 12:02:15.255252    7488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 12:02:15.267272    7488 start.go:128] duration metric: createHost completed in 43.8752411s
	I0114 12:02:15.267272    7488 start.go:83] releasing machines lock for "cilium-114511", held for 43.8761903s
	I0114 12:02:15.275258    7488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-114511
	I0114 12:02:15.483191    7488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 12:02:15.495622    7488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-114511
	I0114 12:02:15.496243    7488 ssh_runner.go:195] Run: cat /version.json
	I0114 12:02:15.504930    7488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-114511
	I0114 12:02:15.715670    7488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50116 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-114511\id_rsa Username:docker}
	I0114 12:02:15.731231    7488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50116 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-114511\id_rsa Username:docker}
	I0114 12:02:15.799526    7488 ssh_runner.go:195] Run: systemctl --version
	I0114 12:02:15.907569    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0114 12:02:15.933665    7488 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0114 12:02:15.983271    7488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 12:02:16.169721    7488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0114 12:02:16.389862    7488 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 12:02:16.413871    7488 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0114 12:02:16.422907    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 12:02:16.449147    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 12:02:16.512217    7488 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
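The /etc/crictl.yaml written a few lines above (12:02:16) is what lets the later bare `sudo /usr/bin/crictl version` call find the cri-dockerd socket; without that file the endpoint would have to be passed explicitly, roughly:
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version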
	I0114 12:02:16.699670    7488 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 12:02:16.898295    7488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 12:02:17.079444    7488 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 12:02:17.741034    7488 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0114 12:02:17.939318    7488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 12:02:18.148710    7488 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0114 12:02:18.179745    7488 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0114 12:02:18.194774    7488 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0114 12:02:18.213548    7488 start.go:472] Will wait 60s for crictl version
	I0114 12:02:18.223091    7488 ssh_runner.go:195] Run: which crictl
	I0114 12:02:18.250089    7488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 12:02:18.332774    7488 start.go:488] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0114 12:02:18.341850    7488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 12:02:18.427131    7488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 12:02:18.501134    7488 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0114 12:02:18.509133    7488 cli_runner.go:164] Run: docker exec -t cilium-114511 dig +short host.docker.internal
	I0114 12:02:18.838656    7488 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0114 12:02:18.847638    7488 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0114 12:02:18.859652    7488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 12:02:18.904463    7488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-114511
	I0114 12:02:19.122457    7488 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 12:02:19.131456    7488 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 12:02:19.191053    7488 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0114 12:02:19.191053    7488 docker.go:543] Images already preloaded, skipping extraction
	I0114 12:02:19.198059    7488 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 12:02:19.267695    7488 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0114 12:02:19.267695    7488 cache_images.go:84] Images are preloaded, skipping loading
	I0114 12:02:19.276707    7488 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 12:02:19.431509    7488 cni.go:95] Creating CNI manager for "cilium"
	I0114 12:02:19.432099    7488 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 12:02:19.432099    7488 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-114511 NodeName:cilium-114511 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.0.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 12:02:19.432394    7488 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "cilium-114511"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 12:02:19.432558    7488 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=cilium-114511 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:cilium-114511 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
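The kubeadm YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document file) and the kubelet flags are consumed together by the single kubeadm invocation at 12:02:21 below; stripped of the driver-specific --ignore-preflight-errors list, the core call is effectively:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml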
	I0114 12:02:19.444767    7488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 12:02:19.465760    7488 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 12:02:19.474767    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 12:02:19.497386    7488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (473 bytes)
	I0114 12:02:19.530379    7488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 12:02:19.567432    7488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2030 bytes)
	I0114 12:02:19.622395    7488 ssh_runner.go:195] Run: grep 172.17.0.2	control-plane.minikube.internal$ /etc/hosts
	I0114 12:02:19.635388    7488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.0.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 12:02:19.674405    7488 certs.go:54] Setting up C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511 for IP: 172.17.0.2
	I0114 12:02:19.674405    7488 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key
	I0114 12:02:19.674405    7488 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key
	I0114 12:02:19.676426    7488 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\client.key
	I0114 12:02:19.676426    7488 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\client.crt with IP's: []
	I0114 12:02:19.841431    7488 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\client.crt ...
	I0114 12:02:19.841431    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\client.crt: {Name:mkaf8e80072c92404ac4518f27a424772b3a6c0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:02:19.842389    7488 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\client.key ...
	I0114 12:02:19.842389    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\client.key: {Name:mk0fa9f221ab96d6597b33b378ce1f54a0d7c3bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:02:19.844415    7488 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\apiserver.key.7b749c5f
	I0114 12:02:19.844415    7488 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\apiserver.crt.7b749c5f with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0114 12:02:20.080391    7488 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\apiserver.crt.7b749c5f ...
	I0114 12:02:20.080391    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\apiserver.crt.7b749c5f: {Name:mkcfdfafe50e0693af235eddcaa75499bd4a5486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:02:20.081073    7488 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\apiserver.key.7b749c5f ...
	I0114 12:02:20.082065    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\apiserver.key.7b749c5f: {Name:mk9a4b762609fa41ab9b602e20bb8585f9750cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:02:20.082733    7488 certs.go:320] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\apiserver.crt.7b749c5f -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\apiserver.crt
	I0114 12:02:20.089763    7488 certs.go:324] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\apiserver.key.7b749c5f -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\apiserver.key
	I0114 12:02:20.090752    7488 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\proxy-client.key
	I0114 12:02:20.090752    7488 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\proxy-client.crt with IP's: []
	I0114 12:02:20.262296    7488 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\proxy-client.crt ...
	I0114 12:02:20.262296    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\proxy-client.crt: {Name:mk99ec9cfa6b4205d420bcfc93eab37ba41e00e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:02:20.263935    7488 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\proxy-client.key ...
	I0114 12:02:20.263935    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\proxy-client.key: {Name:mka9b0980e4714e3c624fdf43d01b72e175120d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:02:20.271694    7488 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\9968.pem (1338 bytes)
	W0114 12:02:20.272941    7488 certs.go:384] ignoring C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\9968_empty.pem, impossibly tiny 0 bytes
	I0114 12:02:20.272941    7488 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0114 12:02:20.273192    7488 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0114 12:02:20.273450    7488 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0114 12:02:20.273668    7488 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0114 12:02:20.273876    7488 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99682.pem (1708 bytes)
	I0114 12:02:20.275436    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 12:02:20.332779    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 12:02:20.397113    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 12:02:20.447734    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\cilium-114511\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0114 12:02:20.505205    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 12:02:20.559017    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0114 12:02:20.626489    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 12:02:20.679075    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0114 12:02:20.734472    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99682.pem --> /usr/share/ca-certificates/99682.pem (1708 bytes)
	I0114 12:02:20.788816    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 12:02:20.839908    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\9968.pem --> /usr/share/ca-certificates/9968.pem (1338 bytes)
	I0114 12:02:20.893814    7488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 12:02:20.939672    7488 ssh_runner.go:195] Run: openssl version
	I0114 12:02:20.963680    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99682.pem && ln -fs /usr/share/ca-certificates/99682.pem /etc/ssl/certs/99682.pem"
	I0114 12:02:21.006219    7488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99682.pem
	I0114 12:02:21.018218    7488 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:21 /usr/share/ca-certificates/99682.pem
	I0114 12:02:21.027222    7488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99682.pem
	I0114 12:02:21.067944    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99682.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 12:02:21.098944    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 12:02:21.131991    7488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 12:02:21.141948    7488 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I0114 12:02:21.159992    7488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 12:02:21.182979    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 12:02:21.222107    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9968.pem && ln -fs /usr/share/ca-certificates/9968.pem /etc/ssl/certs/9968.pem"
	I0114 12:02:21.258097    7488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9968.pem
	I0114 12:02:21.269083    7488 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:21 /usr/share/ca-certificates/9968.pem
	I0114 12:02:21.279082    7488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9968.pem
	I0114 12:02:21.299095    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9968.pem /etc/ssl/certs/51391683.0"
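The openssl x509 -hash / ln -fs pairs above install each CA under its subject-hash name in /etc/ssl/certs (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL's hashed-directory lookup finds trust anchors. Verifying one of the links by hand would look roughly like:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/$h.0"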
	I0114 12:02:21.322086    7488 kubeadm.go:396] StartCluster: {Name:cilium-114511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-114511 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 12:02:21.329157    7488 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 12:02:21.395553    7488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 12:02:21.435236    7488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 12:02:21.456334    7488 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 12:02:21.467348    7488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 12:02:21.489349    7488 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 12:02:21.489349    7488 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 12:02:21.579715    7488 kubeadm.go:317] W0114 12:02:21.576664    1236 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 12:02:21.666231    7488 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0114 12:02:21.844066    7488 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
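The first kubeadm warning above refers to the criSocket value in the generated config earlier in the log; the non-deprecated form it asks for is the same path with an explicit unix scheme, i.e.:
	criSocket: unix:///var/run/cri-dockerd.sock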
	I0114 12:02:46.275276    7488 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0114 12:02:46.275276    7488 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 12:02:46.275276    7488 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 12:02:46.276549    7488 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 12:02:46.276549    7488 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 12:02:46.276549    7488 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 12:02:46.279970    7488 out.go:204]   - Generating certificates and keys ...
	I0114 12:02:46.279970    7488 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 12:02:46.280650    7488 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 12:02:46.280650    7488 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0114 12:02:46.280650    7488 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0114 12:02:46.280650    7488 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0114 12:02:46.280650    7488 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0114 12:02:46.281663    7488 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0114 12:02:46.282674    7488 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [cilium-114511 localhost] and IPs [172.17.0.2 127.0.0.1 ::1]
	I0114 12:02:46.282674    7488 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0114 12:02:46.282674    7488 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [cilium-114511 localhost] and IPs [172.17.0.2 127.0.0.1 ::1]
	I0114 12:02:46.282674    7488 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0114 12:02:46.282674    7488 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0114 12:02:46.283633    7488 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0114 12:02:46.283633    7488 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 12:02:46.283633    7488 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 12:02:46.283633    7488 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 12:02:46.283633    7488 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 12:02:46.284519    7488 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 12:02:46.284909    7488 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 12:02:46.284909    7488 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 12:02:46.284909    7488 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0114 12:02:46.285564    7488 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 12:02:46.288641    7488 out.go:204]   - Booting up control plane ...
	I0114 12:02:46.288726    7488 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 12:02:46.288726    7488 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 12:02:46.289273    7488 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 12:02:46.289561    7488 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 12:02:46.290350    7488 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 12:02:46.290350    7488 kubeadm.go:317] [apiclient] All control plane components are healthy after 18.006771 seconds
	I0114 12:02:46.290350    7488 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0114 12:02:46.291249    7488 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0114 12:02:46.291249    7488 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0114 12:02:46.291974    7488 kubeadm.go:317] [mark-control-plane] Marking the node cilium-114511 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0114 12:02:46.291974    7488 kubeadm.go:317] [bootstrap-token] Using token: ea1krx.mfvgb6kvn0irxfze
	I0114 12:02:46.295152    7488 out.go:204]   - Configuring RBAC rules ...
	I0114 12:02:46.295487    7488 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0114 12:02:46.295668    7488 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0114 12:02:46.295886    7488 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0114 12:02:46.296214    7488 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0114 12:02:46.296464    7488 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0114 12:02:46.296464    7488 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0114 12:02:46.296464    7488 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0114 12:02:46.297109    7488 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0114 12:02:46.297109    7488 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0114 12:02:46.297109    7488 kubeadm.go:317] 
	I0114 12:02:46.297109    7488 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0114 12:02:46.297109    7488 kubeadm.go:317] 
	I0114 12:02:46.297109    7488 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0114 12:02:46.297109    7488 kubeadm.go:317] 
	I0114 12:02:46.297699    7488 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0114 12:02:46.297699    7488 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0114 12:02:46.297699    7488 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0114 12:02:46.297699    7488 kubeadm.go:317] 
	I0114 12:02:46.297699    7488 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0114 12:02:46.297699    7488 kubeadm.go:317] 
	I0114 12:02:46.298485    7488 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0114 12:02:46.298561    7488 kubeadm.go:317] 
	I0114 12:02:46.298608    7488 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0114 12:02:46.298608    7488 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0114 12:02:46.298608    7488 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0114 12:02:46.298608    7488 kubeadm.go:317] 
	I0114 12:02:46.298608    7488 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0114 12:02:46.299430    7488 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0114 12:02:46.299430    7488 kubeadm.go:317] 
	I0114 12:02:46.299430    7488 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token ea1krx.mfvgb6kvn0irxfze \
	I0114 12:02:46.299958    7488 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:d4abf46d00e7a9b2779f6d5264f268d71e7682a3ed209a13fd506918ad0491d1 \
	I0114 12:02:46.300061    7488 kubeadm.go:317] 	--control-plane 
	I0114 12:02:46.300061    7488 kubeadm.go:317] 
	I0114 12:02:46.300192    7488 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0114 12:02:46.300192    7488 kubeadm.go:317] 
	I0114 12:02:46.300192    7488 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token ea1krx.mfvgb6kvn0irxfze \
	I0114 12:02:46.300948    7488 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:d4abf46d00e7a9b2779f6d5264f268d71e7682a3ed209a13fd506918ad0491d1 
	I0114 12:02:46.301016    7488 cni.go:95] Creating CNI manager for "cilium"
	I0114 12:02:46.303165    7488 out.go:177] * Configuring Cilium (Container Networking Interface) ...
	I0114 12:02:46.319491    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
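The command above is a mount-if-missing guard for the BPF filesystem Cilium needs at /sys/fs/bpf. Outside this throwaway test container, the equivalent persistent setup would typically be an fstab entry along these lines (illustrative, not part of the run):
	bpffs  /sys/fs/bpf  bpf  defaults  0  0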
	I0114 12:02:46.452800    7488 cilium.go:832] Using pod CIDR: 10.244.0.0/16
	I0114 12:02:46.452800    7488 cilium.go:843] cilium options: {PodSubnet:10.244.0.0/16}
	I0114 12:02:46.452800    7488 cilium.go:847] cilium config:
	---
	# Source: cilium/templates/cilium-agent-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-configmap.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: cilium-config
	  namespace: kube-system
	data:
	
	  # Identity allocation mode selects how identities are shared between cilium
	  # nodes by setting how they are stored. The options are "crd" or "kvstore".
	  # - "crd" stores identities in kubernetes as CRDs (custom resource definition).
	  #   These can be queried with:
	  #     kubectl get ciliumid
	  # - "kvstore" stores identities in a kvstore, etcd or consul, that is
	  #   configured below. Cilium versions before 1.6 supported only the kvstore
	  #   backend. Upgrades from these older cilium versions should continue using
	  #   the kvstore by commenting out the identity-allocation-mode below, or
	  #   setting it to "kvstore".
	  identity-allocation-mode: crd
	  cilium-endpoint-gc-interval: "5m0s"
	
	  # If you want to run cilium in debug mode change this value to true
	  debug: "false"
	  # The agent can be put into the following three policy enforcement modes
	  # default, always and never.
	  # https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
	  enable-policy: "default"
	
	  # Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
	  # address.
	  enable-ipv4: "true"
	
	  # Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
	  # address.
	  enable-ipv6: "false"
	  # Users who wish to specify their own custom CNI configuration file must set
	  # custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
	  custom-cni-conf: "false"
	  enable-bpf-clock-probe: "true"
	  # If you want cilium monitor to aggregate tracing for packets, set this level
	  # to "low", "medium", or "maximum". The higher the level, the fewer packets
	  # will be seen in monitor output.
	  monitor-aggregation: medium
	
	  # The monitor aggregation interval governs the typical time between monitor
	  # notification events for each allowed connection.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-interval: 5s
	
	  # The monitor aggregation flags determine which TCP flags, upon their
	  # first observation, cause monitor notifications to be generated.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-flags: all
	  # Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
	  # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
	  bpf-map-dynamic-size-ratio: "0.0025"
	  # bpf-policy-map-max specifies the maximum number of entries in endpoint
	  # policy map (per endpoint)
	  bpf-policy-map-max: "16384"
	  # bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
	  # backend and affinity maps.
	  bpf-lb-map-max: "65536"
	  # Pre-allocation of map entries allows per-packet latency to be reduced, at
	  # the expense of up-front memory allocation for the entries in the maps. The
	  # default value below will minimize memory usage in the default installation;
	  # users who are sensitive to latency may consider setting this to "true".
	  #
	  # This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
	  # this option and behave as though it is set to "true".
	  #
	  # If this value is modified, then during the next Cilium startup the restore
	  # of existing endpoints and tracking of ongoing connections may be disrupted.
	  # As a result, reply packets may be dropped and the load-balancing decisions
	  # for established connections may change.
	  #
	  # If this option is set to "false" during an upgrade from 1.3 or earlier to
	  # 1.4 or later, then it may cause one-time disruptions during the upgrade.
	  preallocate-bpf-maps: "false"
	
	  # Regular expression matching compatible Istio sidecar istio-proxy
	  # container image names
	  sidecar-istio-proxy-image: "cilium/istio_proxy"
	
	  # Name of the cluster. Only relevant when building a mesh of clusters.
	  cluster-name: cluster
	  # Unique ID of the cluster. Must be unique across all connected clusters and
	  # in the range of 1 to 255. Only relevant when building a mesh of clusters.
	  cluster-id: "1"
	
	  # Encapsulation mode for communication between nodes
	  # Possible values:
	  #   - disabled
	  #   - vxlan (default)
	  #   - geneve
	  tunnel: vxlan
	  # Enables L7 proxy for L7 policy enforcement and visibility
	  enable-l7-proxy: "true"
	
	  # wait-bpf-mount makes init container wait until bpf filesystem is mounted
	  wait-bpf-mount: "false"
	
	  masquerade: "true"
	  enable-bpf-masquerade: "true"
	
	  enable-xt-socket-fallback: "true"
	  install-iptables-rules: "true"
	
	  auto-direct-node-routes: "false"
	  enable-bandwidth-manager: "false"
	  enable-local-redirect-policy: "false"
	  kube-proxy-replacement:  "probe"
	  kube-proxy-replacement-healthz-bind-address: ""
	  enable-health-check-nodeport: "true"
	  node-port-bind-protection: "true"
	  enable-auto-protect-node-port-range: "true"
	  enable-session-affinity: "true"
	  k8s-require-ipv4-pod-cidr: "true"
	  k8s-require-ipv6-pod-cidr: "false"
	  enable-endpoint-health-checking: "true"
	  enable-health-checking: "true"
	  enable-well-known-identities: "false"
	  enable-remote-node-identity: "true"
	  operator-api-serve-addr: "127.0.0.1:9234"
	  # Enable Hubble gRPC service.
	  enable-hubble: "true"
	  # UNIX domain socket for Hubble server to listen to.
	  hubble-socket-path:  "/var/run/cilium/hubble.sock"
	  # An additional address for Hubble server to listen to (e.g. ":4244").
	  hubble-listen-address: ":4244"
	  hubble-disable-tls: "false"
	  hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
	  hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
	  hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
	  ipam: "cluster-pool"
	  cluster-pool-ipv4-cidr: "10.244.0.0/16"
	  cluster-pool-ipv4-mask-size: "24"
	  disable-cnp-status-updates: "true"
	  cgroup-root: "/run/cilium/cgroupv2"
	---
	# Source: cilium/templates/cilium-agent-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium
	rules:
	- apiGroups:
	  - networking.k8s.io
	  resources:
	  - networkpolicies
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - namespaces
	  - services
	  - nodes
	  - endpoints
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - pods
	  - pods/finalizers
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	  - delete
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  - nodes/status
	  verbs:
	  - patch
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  # Deprecated for removal in v1.10
	  - create
	  - list
	  - watch
	  - update
	
	  # This is used when validating policies in preflight. This will need to stay
	  # until we figure out how to avoid "get" inside the preflight; then it
	  # should ideally be removed.
	  - get
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	---
	# Source: cilium/templates/cilium-operator-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium-operator
	rules:
	- apiGroups:
	  - ""
	  resources:
	  # to automatically delete [core|kube]dns pods so that they start being
	  # managed by Cilium
	  - pods
	  verbs:
	  - get
	  - list
	  - watch
	  - delete
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  verbs:
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  # To remove node taints
	  - nodes
	  # To set NetworkUnavailable false on startup
	  - nodes/status
	  verbs:
	  - patch
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  # to perform the translation of a CNP that contains 'ToGroup' to its endpoints
	  - services
	  - endpoints
	  # to check apiserver connectivity
	  - namespaces
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/status
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  - create
	  - get
	  - list
	  - update
	  - watch
	# For cilium-operator running in HA mode.
	#
	# Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
	# between multiple running instances.
	# The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
	# common and fewer objects in the cluster watch "all Leases".
	# Support for leases was introduced in coordination.k8s.io/v1 in the Kubernetes 1.14 release.
	# In Cilium we currently don't support HA mode for K8s versions < 1.14. This condition makes sure
	# that we only authorize access to leases resources in supported K8s versions.
	- apiGroups:
	  - coordination.k8s.io
	  resources:
	  - leases
	  verbs:
	  - create
	  - get
	  - update
	---
	# Source: cilium/templates/cilium-agent-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium
	subjects:
	- kind: ServiceAccount
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium-operator
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium-operator
	subjects:
	- kind: ServiceAccount
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-agent-daemonset.yaml
	apiVersion: apps/v1
	kind: DaemonSet
	metadata:
	  labels:
	    k8s-app: cilium
	  name: cilium
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: cilium
	  updateStrategy:
	    rollingUpdate:
	      maxUnavailable: 2
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	        # This annotation plus the CriticalAddonsOnly toleration makes
	        # cilium a critical pod in the cluster, which ensures cilium
	        # gets priority scheduling.
	        # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
	        scheduler.alpha.kubernetes.io/critical-pod: ""
	      labels:
	        k8s-app: cilium
	    spec:
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: k8s-app
	                operator: In
	                values:
	                - cilium
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        command:
	        - cilium-agent
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9879
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 10
	          # The initial delay for the liveness probe is intentionally large to
	          # avoid an endless kill & restart cycle in the event that the initial
	          # bootstrapping takes longer than expected.
	          initialDelaySeconds: 120
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        readinessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9879
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 3
	          initialDelaySeconds: 5
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_FLANNEL_MASTER_DEVICE
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-master-device
	              name: cilium-config
	              optional: true
	        - name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-uninstall-on-exit
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CLUSTERMESH_CONFIG
	          value: /var/lib/cilium/clustermesh/
	        - name: CILIUM_CNI_CHAINING_MODE
	          valueFrom:
	            configMapKeyRef:
	              key: cni-chaining-mode
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CUSTOM_CNI_CONF
	          valueFrom:
	            configMapKeyRef:
	              key: custom-cni-conf
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.12.3@sha256:30de50c4dc0a1e1077e9e7917a54d5cab253058b3f779822aec00f5c817ca826"
	        imagePullPolicy: IfNotPresent
	        lifecycle:
	          postStart:
	            exec:
	              command:
	              - "/cni-install.sh"
	              - "--enable-debug=false"
	          preStop:
	            exec:
	              command:
	              - /cni-uninstall.sh
	        name: cilium-agent
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	            - SYS_MODULE
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        - mountPath: /host/opt/cni/bin
	          name: cni-path
	        - mountPath: /host/etc/cni/net.d
	          name: etc-cni-netd
	        - mountPath: /var/lib/cilium/clustermesh
	          name: clustermesh-secrets
	          readOnly: true
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	          # Needed to be able to load kernel modules
	        - mountPath: /lib/modules
	          name: lib-modules
	          readOnly: true
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	        - mountPath: /var/lib/cilium/tls/hubble
	          name: hubble-tls
	          readOnly: true
	      hostNetwork: true
	      initContainers:
	      # Required to mount the cgroup2 filesystem on the underlying Kubernetes node.
	      # We use the nsenter command with the host's cgroup and mount namespaces enabled.
	      - name: mount-cgroup
	        env:
	          - name: CGROUP_ROOT
	            value: /run/cilium/cgroupv2
	          - name: BIN_PATH
	            value: /opt/cni/bin
	        command:
	          - sh
	          - -c
	          # The statically linked Go program binary is invoked to avoid any
	          # dependency on utilities like sh and mount that can be missing on certain
	          # distros installed on the underlying host. Copy the binary to the
	          # same directory where we install cilium cni plugin so that exec permissions
	          # are available.
	          - 'cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; rm /hostbin/cilium-mount'
	        image: "quay.io/cilium/cilium:v1.12.3@sha256:30de50c4dc0a1e1077e9e7917a54d5cab253058b3f779822aec00f5c817ca826"
	        imagePullPolicy: IfNotPresent
	        volumeMounts:
	          - mountPath: /hostproc
	            name: hostproc
	          - mountPath: /hostbin
	            name: cni-path
	        securityContext:
	          privileged: true
	      - command:
	        - /init-container.sh
	        env:
	        - name: CILIUM_ALL_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_BPF_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-bpf-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_WAIT_BPF_MOUNT
	          valueFrom:
	            configMapKeyRef:
	              key: wait-bpf-mount
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.12.3@sha256:30de50c4dc0a1e1077e9e7917a54d5cab253058b3f779822aec00f5c817ca826"
	        imagePullPolicy: IfNotPresent
	        name: clean-cilium-state
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	          mountPropagation: HostToContainer
	          # Required to mount cgroup filesystem from the host to cilium agent pod
	        - mountPath: /run/cilium/cgroupv2
	          name: cilium-cgroup
	          mountPropagation: HostToContainer
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        resources:
	          requests:
	            cpu: 100m
	            memory: 100Mi
	      restartPolicy: Always
	      priorityClassName: system-node-critical
	      serviceAccount: cilium
	      serviceAccountName: cilium
	      terminationGracePeriodSeconds: 1
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To keep state between restarts / upgrades
	      - hostPath:
	          path: /var/run/cilium
	          type: DirectoryOrCreate
	        name: cilium-run
	        # To keep state between restarts / upgrades for bpf maps
	      - hostPath:
	          path: /sys/fs/bpf
	          type: DirectoryOrCreate
	        name: bpf-maps
	      # To mount cgroup2 filesystem on the host
	      - hostPath:
	          path: /proc
	          type: Directory
	        name: hostproc
	      # To keep state between restarts / upgrades for cgroup2 filesystem
	      - hostPath:
	          path: /run/cilium/cgroupv2
	          type: DirectoryOrCreate
	        name: cilium-cgroup
	      # To install cilium cni plugin in the host
	      - hostPath:
	          path:  /opt/cni/bin
	          type: DirectoryOrCreate
	        name: cni-path
	        # To install cilium cni configuration in the host
	      - hostPath:
	          path: /etc/cni/net.d
	          type: DirectoryOrCreate
	        name: etc-cni-netd
	        # To be able to load kernel modules
	      - hostPath:
	          path: /lib/modules
	        name: lib-modules
	        # To access iptables concurrently with other processes (e.g. kube-proxy)
	      - hostPath:
	          path: /run/xtables.lock
	          type: FileOrCreate
	        name: xtables-lock
	        # To read the clustermesh configuration
	      - name: clustermesh-secrets
	        secret:
	          defaultMode: 420
	          optional: true
	          secretName: cilium-clustermesh
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	      - name: hubble-tls
	        projected:
	          sources:
	          - secret:
	              name: hubble-server-certs
	              items:
	                - key: tls.crt
	                  path: server.crt
	                - key: tls.key
	                  path: server.key
	              optional: true
	          - configMap:
	              name: hubble-ca-cert
	              items:
	                - key: ca.crt
	                  path: client-ca.crt
	              optional: true
	---
	# Source: cilium/templates/cilium-operator-deployment.yaml
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  labels:
	    io.cilium/app: operator
	    name: cilium-operator
	  name: cilium-operator
	  namespace: kube-system
	spec:
	  # We support HA mode only for Kubernetes version > 1.14
	  # See docs on ServerCapabilities.LeasesResourceLock in file pkg/k8s/version/version.go
	  # for more details.
	  replicas: 1
	  selector:
	    matchLabels:
	      io.cilium/app: operator
	      name: cilium-operator
	  strategy:
	    rollingUpdate:
	      maxSurge: 1
	      maxUnavailable: 1
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	      labels:
	        io.cilium/app: operator
	        name: cilium-operator
	    spec:
	      # In HA mode, cilium-operator pods must not be scheduled on the same
	      # node as they will clash with each other.
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: io.cilium/app
	                operator: In
	                values:
	                - operator
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        - --debug=$(CILIUM_DEBUG)
	        command:
	        - cilium-operator-generic
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_DEBUG
	          valueFrom:
	            configMapKeyRef:
	              key: debug
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/operator-generic:v1.12.3@sha256:816ec1da586139b595eeb31932c61a7c13b07fb4a0255341c0e0f18608e84eff"
	        imagePullPolicy: IfNotPresent
	        name: cilium-operator
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9234
	            scheme: HTTP
	          initialDelaySeconds: 60
	          periodSeconds: 10
	          timeoutSeconds: 3
	        volumeMounts:
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	      hostNetwork: true
	      restartPolicy: Always
	      priorityClassName: system-cluster-critical
	      serviceAccount: cilium-operator
	      serviceAccountName: cilium-operator
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	
	I0114 12:02:46.452800    7488 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0114 12:02:46.453693    7488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (23434 bytes)
	I0114 12:02:46.672103    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0114 12:02:49.284052    7488 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.6108973s)
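	The Cilium manifest has been applied at this point; the rollout can be verified by hand with standard kubectl commands. A minimal sketch, assuming a kubeconfig pointing at this cluster:
	  # Wait for the agent DaemonSet to finish rolling out, then list its pods and the operator.
	  kubectl -n kube-system rollout status daemonset/cilium --timeout=5m
	  kubectl -n kube-system get pods -l k8s-app=cilium
	  kubectl -n kube-system get deployment cilium-operator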
	I0114 12:02:49.284052    7488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 12:02:49.298042    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81 minikube.k8s.io/name=cilium-114511 minikube.k8s.io/updated_at=2023_01_14T12_02_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:49.301038    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:49.304054    7488 ops.go:34] apiserver oom_adj: -16
	I0114 12:02:49.678680    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:50.488763    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:50.991050    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:51.492306    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:51.999610    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:52.496741    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:52.993621    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:53.490850    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:53.991647    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:54.492376    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:54.993143    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:55.489686    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:55.993661    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:56.490850    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:56.995275    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:57.497021    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:57.988569    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:58.493859    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:58.992591    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:02:59.590005    7488 kubeadm.go:1067] duration metric: took 10.3058452s to wait for elevateKubeSystemPrivileges.
	I0114 12:02:59.590005    7488 kubeadm.go:398] StartCluster complete in 38.2675209s
	I0114 12:02:59.590005    7488 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:02:59.590978    7488 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0114 12:02:59.597954    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:03:00.367342    7488 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-114511" rescaled to 1
	I0114 12:03:00.367342    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0114 12:03:00.367342    7488 start.go:212] Will wait 5m0s for node &{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 12:03:00.373420    7488 out.go:177] * Verifying Kubernetes components...
	I0114 12:03:00.367342    7488 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0114 12:03:00.368343    7488 config.go:180] Loaded profile config "cilium-114511": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 12:03:00.373420    7488 addons.go:65] Setting storage-provisioner=true in profile "cilium-114511"
	I0114 12:03:00.373420    7488 addons.go:65] Setting default-storageclass=true in profile "cilium-114511"
	I0114 12:03:00.373420    7488 addons.go:227] Setting addon storage-provisioner=true in "cilium-114511"
	W0114 12:03:00.373420    7488 addons.go:236] addon storage-provisioner should already be in state true
	I0114 12:03:00.373420    7488 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-114511"
	I0114 12:03:00.373420    7488 host.go:66] Checking if "cilium-114511" exists ...
	I0114 12:03:00.403327    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 12:03:00.404485    7488 cli_runner.go:164] Run: docker container inspect cilium-114511 --format={{.State.Status}}
	I0114 12:03:00.407325    7488 cli_runner.go:164] Run: docker container inspect cilium-114511 --format={{.State.Status}}
	I0114 12:03:00.681623    7488 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 12:03:00.684634    7488 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 12:03:00.684634    7488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0114 12:03:00.695633    7488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-114511
	I0114 12:03:00.730453    7488 addons.go:227] Setting addon default-storageclass=true in "cilium-114511"
	W0114 12:03:00.730453    7488 addons.go:236] addon default-storageclass should already be in state true
	I0114 12:03:00.730453    7488 host.go:66] Checking if "cilium-114511" exists ...
	I0114 12:03:00.738452    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0114 12:03:00.752456    7488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-114511
	I0114 12:03:00.762462    7488 cli_runner.go:164] Run: docker container inspect cilium-114511 --format={{.State.Status}}
	I0114 12:03:00.991212    7488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50116 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-114511\id_rsa Username:docker}
	I0114 12:03:01.025225    7488 node_ready.go:35] waiting up to 5m0s for node "cilium-114511" to be "Ready" ...
	I0114 12:03:01.038215    7488 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0114 12:03:01.038215    7488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0114 12:03:01.047133    7488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-114511
	I0114 12:03:01.062848    7488 node_ready.go:49] node "cilium-114511" has status "Ready":"True"
	I0114 12:03:01.062848    7488 node_ready.go:38] duration metric: took 37.6231ms waiting for node "cilium-114511" to be "Ready" ...
	I0114 12:03:01.062848    7488 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 12:03:01.175835    7488 pod_ready.go:78] waiting up to 5m0s for pod "cilium-4lmht" in "kube-system" namespace to be "Ready" ...
	I0114 12:03:01.289842    7488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50116 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\cilium-114511\id_rsa Username:docker}
	I0114 12:03:01.690452    7488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 12:03:02.178499    7488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0114 12:03:03.365460    7488 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.6269812s)
	I0114 12:03:03.365460    7488 start.go:833] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
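	The injected hosts block can be inspected afterwards in the CoreDNS ConfigMap; a minimal sketch, assuming the profile's kubectl context is named cilium-114511 as minikube sets up by default:
	  # The Corefile should now contain a "hosts" stanza mapping 192.168.65.2 to host.minikube.internal.
	  kubectl --context cilium-114511 -n kube-system get configmap coredns -o yaml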
	I0114 12:03:03.477704    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:04.773239    7488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.5947127s)
	I0114 12:03:04.773239    7488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.082755s)
	I0114 12:03:04.777632    7488 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0114 12:03:04.781270    7488 addons.go:488] enableAddons completed in 4.4138821s
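	The enabled addons can be checked from the host; a minimal sketch, assuming the same minikube binary and profile name used above (the provisioner pod is typically named storage-provisioner):
	  # List addon status for the cilium-114511 profile.
	  out/minikube-windows-amd64.exe -p cilium-114511 addons list
	  # Confirm the default StorageClass and the provisioner pod exist.
	  kubectl --context cilium-114511 get storageclass
	  kubectl --context cilium-114511 -n kube-system get pod storage-provisioner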
	I0114 12:03:05.898171    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:07.954119    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:10.388999    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:12.886769    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:14.892022    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:16.895216    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:19.375285    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:21.386991    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:23.883476    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:26.386452    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:28.390364    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:30.876548    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:32.888530    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:35.391451    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:37.875936    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:39.894407    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:42.422709    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:44.891262    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:46.900053    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:49.396937    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:51.903642    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:54.384341    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:03:56.387296    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:02.394575    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:04.886714    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:06.887820    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:09.384876    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:11.891085    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:14.386329    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:16.873819    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:19.475691    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:21.878889    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:24.433257    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:26.889218    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:28.955099    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:31.458140    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:34.498344    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:37.248329    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:39.380032    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:41.385521    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:43.456158    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:49.960158    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:52.390458    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:54.953834    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:57.381444    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:04:59.385464    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:01.890007    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:04.390247    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:06.888801    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:09.455751    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:11.879810    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:15.345159    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:17.389712    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:19.562955    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:22.063594    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:24.251258    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:26.393476    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:28.892463    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:39.058083    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:41.383972    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:43.389415    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:45.887520    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:47.895263    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:50.386590    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:52.881440    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:54.882577    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:57.468459    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:05:59.883459    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:02.388708    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:04.390749    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:07.651367    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:11.836637    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:13.887354    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:16.384116    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:18.402470    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:20.893828    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:22.895407    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:25.375519    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:27.391295    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:29.881768    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:31.887994    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:34.382526    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:36.886564    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:39.386855    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:41.882644    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:44.385408    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:46.400778    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:48.881804    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:50.886283    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:53.388859    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:55.896062    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:58.399807    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:00.882928    7488 pod_ready.go:102] pod "cilium-4lmht" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:01.402531    7488 pod_ready.go:81] duration metric: took 4m0.2242096s waiting for pod "cilium-4lmht" in "kube-system" namespace to be "Ready" ...
	E0114 12:07:01.402531    7488 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
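	When the agent pod stays not-Ready for the whole wait window as it did here, the usual next step is to look at the pod itself; a minimal sketch, assuming the profile's kubectl context is named cilium-114511:
	  # Show scheduling and probe events for the stuck agent pod.
	  kubectl --context cilium-114511 -n kube-system describe pod cilium-4lmht
	  # Tail the agent logs for BPF, mount, or apiserver connectivity errors.
	  kubectl --context cilium-114511 -n kube-system logs cilium-4lmht --tail=100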
	I0114 12:07:01.402531    7488 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-6b885c4575-8vrwb" in "kube-system" namespace to be "Ready" ...
	I0114 12:07:01.415514    7488 pod_ready.go:92] pod "cilium-operator-6b885c4575-8vrwb" in "kube-system" namespace has status "Ready":"True"
	I0114 12:07:01.415514    7488 pod_ready.go:81] duration metric: took 12.9822ms waiting for pod "cilium-operator-6b885c4575-8vrwb" in "kube-system" namespace to be "Ready" ...
	I0114 12:07:01.415514    7488 pod_ready.go:78] waiting up to 5m0s for pod "coredns-565d847f94-8dms9" in "kube-system" namespace to be "Ready" ...
	I0114 12:07:01.427483    7488 pod_ready.go:97] error getting pod "coredns-565d847f94-8dms9" in "kube-system" namespace (skipping!): pods "coredns-565d847f94-8dms9" not found
	I0114 12:07:01.427483    7488 pod_ready.go:81] duration metric: took 11.9694ms waiting for pod "coredns-565d847f94-8dms9" in "kube-system" namespace to be "Ready" ...
	E0114 12:07:01.427483    7488 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-565d847f94-8dms9" in "kube-system" namespace (skipping!): pods "coredns-565d847f94-8dms9" not found
	I0114 12:07:01.427483    7488 pod_ready.go:78] waiting up to 5m0s for pod "coredns-565d847f94-m62vf" in "kube-system" namespace to be "Ready" ...
	I0114 12:07:03.470634    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:05.482910    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:07.488653    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:09.966680    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:11.975069    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:14.478063    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:16.483447    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:18.980750    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:21.473454    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:23.976347    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:26.478496    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:28.979354    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:31.471618    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:33.486061    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:35.975522    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:38.474751    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:40.476387    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:42.481666    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:44.987454    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:47.480937    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:49.969154    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:51.974958    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:54.469717    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:56.475935    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:58.479694    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:00.975603    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:02.977526    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:05.488384    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:07.979943    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:10.471113    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:12.474268    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:14.485815    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:16.969871    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:18.976778    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:21.477253    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:23.481601    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:25.482116    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:27.969184    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:29.978373    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:31.979841    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:34.472396    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:36.480543    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:38.481227    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:40.973807    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:43.487720    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:45.974546    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:47.981897    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:50.475263    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:52.483098    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:54.974362    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:57.474178    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:59.501122    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:01.973812    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:04.478592    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:06.982003    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:09.466719    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:11.481350    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:13.972820    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:15.984939    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:18.472592    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:20.480237    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:22.980086    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:25.486213    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:27.972885    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:29.979097    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:32.487367    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:34.977072    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:36.982250    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:39.473239    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:41.482814    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:43.982002    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:46.475574    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:48.476779    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:50.480462    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:52.969486    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:54.970348    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:56.988663    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:59.467586    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:01.477347    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:03.488655    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:05.979739    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:07.980403    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:10.476125    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:12.975769    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:14.976907    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:16.985450    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:19.479308    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:21.973380    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:24.473707    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:26.976767    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:29.485385    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:31.977155    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:34.472773    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:36.978831    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:38.979571    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:41.471520    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:43.485124    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:45.990227    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:48.486094    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:50.987172    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:53.473865    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:55.480240    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:57.485684    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:59.994933    7488 pod_ready.go:102] pod "coredns-565d847f94-m62vf" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:01.501844    7488 pod_ready.go:81] duration metric: took 4m0.0718081s waiting for pod "coredns-565d847f94-m62vf" in "kube-system" namespace to be "Ready" ...
	E0114 12:11:01.501844    7488 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0114 12:11:01.501844    7488 pod_ready.go:38] duration metric: took 8m0.4329542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 12:11:01.504873    7488 out.go:177] 
	W0114 12:11:01.507836    7488 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0114 12:11:01.507836    7488 out.go:239] * 
	W0114 12:11:01.509850    7488 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 12:11:01.512833    7488 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (574.04s)
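For interactive debugging of a hang like the one above, where CoreDNS never reports Ready before the GUEST_START timeout, the stuck pod can be inspected directly while the profile is still running. A minimal sketch, assuming the cilium-114511 profile from this run and that minikube created a kubeconfig context of the same name (adjust the profile/context name for other runs):

	# list the CoreDNS pod(s) the wait loop above was polling, with node and IP
	kubectl --context cilium-114511 -n kube-system get pods -l k8s-app=kube-dns -o wide

	# show container status and recent events for the stuck pod(s)
	kubectl --context cilium-114511 -n kube-system describe pods -l k8s-app=kube-dns

	# capture the full log bundle requested by the error box above
	out/minikube-windows-amd64.exe -p cilium-114511 logs --file=logs.txt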

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (613.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-114511 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker
E0114 12:04:08.464903    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:04:19.716709    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-114511 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 80 (10m12.7538434s)

                                                
                                                
-- stdout --
	* [calico-114511] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node calico-114511 in cluster calico-114511
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 12:04:03.603143    5748 out.go:296] Setting OutFile to fd 1532 ...
	I0114 12:04:03.697125    5748 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 12:04:03.697125    5748 out.go:309] Setting ErrFile to fd 1600...
	I0114 12:04:03.697125    5748 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 12:04:03.725127    5748 out.go:303] Setting JSON to false
	I0114 12:04:03.729132    5748 start.go:125] hostinfo: {"hostname":"minikube2","uptime":9455,"bootTime":1673688388,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2486 Build 19045.2486","kernelVersion":"10.0.19045.2486 Build 19045.2486","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0114 12:04:03.729132    5748 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 12:04:03.733126    5748 out.go:177] * [calico-114511] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
	I0114 12:04:03.736127    5748 notify.go:220] Checking for updates...
	I0114 12:04:03.738128    5748 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0114 12:04:03.741139    5748 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0114 12:04:03.744121    5748 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 12:04:03.746200    5748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 12:04:03.751144    5748 config.go:180] Loaded profile config "cilium-114511": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 12:04:03.751144    5748 config.go:180] Loaded profile config "embed-certs-115542": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 12:04:03.752184    5748 config.go:180] Loaded profile config "kindnet-114509": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 12:04:03.752184    5748 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 12:04:04.197025    5748 docker.go:138] docker version: linux-20.10.21
	I0114 12:04:04.210020    5748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 12:04:05.014718    5748 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:true NGoroutines:61 SystemTime:2023-01-14 12:04:04.4212829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plug
ins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 12:04:05.020749    5748 out.go:177] * Using the docker driver based on user configuration
	I0114 12:04:05.023716    5748 start.go:294] selected driver: docker
	I0114 12:04:05.023716    5748 start.go:838] validating driver "docker" against <nil>
	I0114 12:04:05.023716    5748 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 12:04:05.108722    5748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 12:04:05.931751    5748 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:true NGoroutines:61 SystemTime:2023-01-14 12:04:05.351488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugi
ns\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 12:04:05.931751    5748 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 12:04:05.932724    5748 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 12:04:05.935731    5748 out.go:177] * Using Docker Desktop driver with root privileges
	I0114 12:04:05.937746    5748 cni.go:95] Creating CNI manager for "calico"
	I0114 12:04:05.937746    5748 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0114 12:04:05.937746    5748 start_flags.go:319] config:
	{Name:calico-114511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-114511 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 12:04:05.941736    5748 out.go:177] * Starting control plane node calico-114511 in cluster calico-114511
	I0114 12:04:05.943741    5748 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 12:04:05.947733    5748 out.go:177] * Pulling base image ...
	I0114 12:04:05.950727    5748 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 12:04:05.950727    5748 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 12:04:05.950727    5748 preload.go:148] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 12:04:05.950727    5748 cache.go:57] Caching tarball of preloaded images
	I0114 12:04:05.950727    5748 preload.go:174] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 12:04:05.950727    5748 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 12:04:05.951726    5748 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\config.json ...
	I0114 12:04:05.951726    5748 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\config.json: {Name:mk7d61a640a6d772ab508a8b74694828717dc6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:04:06.217745    5748 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 12:04:06.217745    5748 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 12:04:06.217745    5748 cache.go:193] Successfully downloaded all kic artifacts
	I0114 12:04:06.217745    5748 start.go:364] acquiring machines lock for calico-114511: {Name:mk3c86f7c5e510a6a6466d70cc2e7aeee6a6d951 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 12:04:06.217745    5748 start.go:368] acquired machines lock for "calico-114511" in 0s
	I0114 12:04:06.217745    5748 start.go:93] Provisioning new machine with config: &{Name:calico-114511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-114511 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 12:04:06.217745    5748 start.go:125] createHost starting for "" (driver="docker")
	I0114 12:04:06.222728    5748 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0114 12:04:06.223765    5748 start.go:159] libmachine.API.Create for "calico-114511" (driver="docker")
	I0114 12:04:06.223765    5748 client.go:168] LocalClient.Create starting
	I0114 12:04:06.223765    5748 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0114 12:04:06.224742    5748 main.go:134] libmachine: Decoding PEM data...
	I0114 12:04:06.224742    5748 main.go:134] libmachine: Parsing certificate...
	I0114 12:04:06.224742    5748 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0114 12:04:06.224742    5748 main.go:134] libmachine: Decoding PEM data...
	I0114 12:04:06.224742    5748 main.go:134] libmachine: Parsing certificate...
	I0114 12:04:06.242738    5748 cli_runner.go:164] Run: docker network inspect calico-114511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0114 12:04:06.531795    5748 cli_runner.go:211] docker network inspect calico-114511 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0114 12:04:06.542741    5748 network_create.go:280] running [docker network inspect calico-114511] to gather additional debugging logs...
	I0114 12:04:06.542741    5748 cli_runner.go:164] Run: docker network inspect calico-114511
	W0114 12:04:06.784744    5748 cli_runner.go:211] docker network inspect calico-114511 returned with exit code 1
	I0114 12:04:06.784744    5748 network_create.go:283] error running [docker network inspect calico-114511]: docker network inspect calico-114511: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-114511
	I0114 12:04:06.784744    5748 network_create.go:285] output of [docker network inspect calico-114511]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-114511
	
	** /stderr **
	I0114 12:04:06.798739    5748 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 12:04:07.111812    5748 network.go:277] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000272858] misses:0}
	I0114 12:04:07.111812    5748 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 12:04:07.111812    5748 network_create.go:123] attempt to create docker network calico-114511 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0114 12:04:07.123825    5748 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-114511 calico-114511
	W0114 12:04:07.385807    5748 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-114511 calico-114511 returned with exit code 1
	W0114 12:04:07.385807    5748 network_create.go:115] failed to create docker network calico-114511 192.168.49.0/24, will retry: subnet is taken
	I0114 12:04:07.412826    5748 network.go:268] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000272858] amended:false}} dirty:map[] misses:0}
	I0114 12:04:07.412826    5748 network.go:213] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 12:04:07.437804    5748 network.go:277] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000272858] amended:true}} dirty:map[192.168.49.0:0xc000272858 192.168.58.0:0xc0002728f0] misses:0}
	I0114 12:04:07.437804    5748 network.go:210] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 12:04:07.437804    5748 network_create.go:123] attempt to create docker network calico-114511 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0114 12:04:07.458821    5748 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-114511 calico-114511
	W0114 12:04:07.704825    5748 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-114511 calico-114511 returned with exit code 1
	W0114 12:04:07.704825    5748 network_create.go:115] failed to create docker network calico-114511 192.168.58.0/24, will retry: subnet is taken
	I0114 12:04:07.736812    5748 network.go:268] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000272858] amended:true}} dirty:map[192.168.49.0:0xc000272858 192.168.58.0:0xc0002728f0] misses:1}
	I0114 12:04:07.736812    5748 network.go:213] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 12:04:07.761802    5748 network.go:277] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000272858] amended:true}} dirty:map[192.168.49.0:0xc000272858 192.168.58.0:0xc0002728f0 192.168.67.0:0xc00014af30] misses:1}
	I0114 12:04:07.761802    5748 network.go:210] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 12:04:07.761802    5748 network_create.go:123] attempt to create docker network calico-114511 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0114 12:04:07.776805    5748 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-114511 calico-114511
	I0114 12:04:08.189552    5748 network_create.go:107] docker network calico-114511 192.168.67.0/24 created
	I0114 12:04:08.189552    5748 kic.go:117] calculated static IP "192.168.67.2" for the "calico-114511" container
	I0114 12:04:08.223496    5748 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0114 12:04:08.536889    5748 cli_runner.go:164] Run: docker volume create calico-114511 --label name.minikube.sigs.k8s.io=calico-114511 --label created_by.minikube.sigs.k8s.io=true
	I0114 12:04:08.827872    5748 oci.go:103] Successfully created a docker volume calico-114511
	I0114 12:04:08.838914    5748 cli_runner.go:164] Run: docker run --rm --name calico-114511-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-114511 --entrypoint /usr/bin/test -v calico-114511:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0114 12:04:11.199976    5748 cli_runner.go:217] Completed: docker run --rm --name calico-114511-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-114511 --entrypoint /usr/bin/test -v calico-114511:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib: (2.3610375s)
	I0114 12:04:11.199976    5748 oci.go:107] Successfully prepared a docker volume calico-114511
	I0114 12:04:11.199976    5748 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 12:04:11.199976    5748 kic.go:190] Starting extracting preloaded images to volume ...
	I0114 12:04:11.212993    5748 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-114511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
	I0114 12:04:37.515497    5748 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-114511:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (26.3022279s)
	I0114 12:04:37.515497    5748 kic.go:199] duration metric: took 26.315245 seconds to extract preloaded images to volume
	I0114 12:04:37.522487    5748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 12:04:38.321711    5748 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:true NGoroutines:61 SystemTime:2023-01-14 12:04:37.7907169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plug
ins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 12:04:38.331132    5748 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0114 12:04:39.124576    5748 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-114511 --name calico-114511 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-114511 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-114511 --network calico-114511 --ip 192.168.67.2 --volume calico-114511:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0114 12:04:40.891123    5748 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-114511 --name calico-114511 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-114511 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-114511 --network calico-114511 --ip 192.168.67.2 --volume calico-114511:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c: (1.766528s)
	I0114 12:04:40.904098    5748 cli_runner.go:164] Run: docker container inspect calico-114511 --format={{.State.Running}}
	I0114 12:04:41.157088    5748 cli_runner.go:164] Run: docker container inspect calico-114511 --format={{.State.Status}}
	I0114 12:04:41.421843    5748 cli_runner.go:164] Run: docker exec calico-114511 stat /var/lib/dpkg/alternatives/iptables
	I0114 12:04:41.796273    5748 oci.go:144] the created container "calico-114511" has a running status.
	I0114 12:04:41.796273    5748 kic.go:221] Creating ssh key for kic: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-114511\id_rsa...
	I0114 12:04:42.005167    5748 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-114511\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0114 12:04:42.400512    5748 cli_runner.go:164] Run: docker container inspect calico-114511 --format={{.State.Status}}
	I0114 12:04:42.643817    5748 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0114 12:04:42.643817    5748 kic_runner.go:114] Args: [docker exec --privileged calico-114511 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0114 12:04:43.021801    5748 kic.go:261] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-114511\id_rsa...
	I0114 12:04:43.659848    5748 cli_runner.go:164] Run: docker container inspect calico-114511 --format={{.State.Status}}
	I0114 12:04:43.907680    5748 machine.go:88] provisioning docker machine ...
	I0114 12:04:43.907680    5748 ubuntu.go:169] provisioning hostname "calico-114511"
	I0114 12:04:43.914896    5748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-114511
	I0114 12:04:44.147521    5748 main.go:134] libmachine: Using SSH client type: native
	I0114 12:04:44.159353    5748 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x68fea0] 0x692e20 <nil>  [] 0s} 127.0.0.1 50373 <nil> <nil>}
	I0114 12:04:44.159353    5748 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-114511 && echo "calico-114511" | sudo tee /etc/hostname
	I0114 12:04:44.464929    5748 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-114511
	
	I0114 12:04:44.480953    5748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-114511
	I0114 12:04:44.713899    5748 main.go:134] libmachine: Using SSH client type: native
	I0114 12:04:44.713899    5748 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x68fea0] 0x692e20 <nil>  [] 0s} 127.0.0.1 50373 <nil> <nil>}
	I0114 12:04:44.713899    5748 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-114511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-114511/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-114511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 12:04:44.859219    5748 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 12:04:44.859298    5748 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I0114 12:04:44.859298    5748 ubuntu.go:177] setting up certificates
	I0114 12:04:44.859298    5748 provision.go:83] configureAuth start
	I0114 12:04:44.876223    5748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-114511
	I0114 12:04:45.083853    5748 provision.go:138] copyHostCerts
	I0114 12:04:45.083853    5748 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem, removing ...
	I0114 12:04:45.083853    5748 exec_runner.go:207] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.pem
	I0114 12:04:45.084808    5748 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0114 12:04:45.085807    5748 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem, removing ...
	I0114 12:04:45.085807    5748 exec_runner.go:207] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cert.pem
	I0114 12:04:45.085807    5748 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0114 12:04:45.086842    5748 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem, removing ...
	I0114 12:04:45.086842    5748 exec_runner.go:207] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\key.pem
	I0114 12:04:45.086842    5748 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1675 bytes)
	I0114 12:04:45.087805    5748 provision.go:112] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-114511 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-114511]
	I0114 12:04:45.263785    5748 provision.go:172] copyRemoteCerts
	I0114 12:04:45.275508    5748 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 12:04:45.285312    5748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-114511
	I0114 12:04:45.491330    5748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50373 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-114511\id_rsa Username:docker}
	I0114 12:04:45.640751    5748 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 12:04:45.700755    5748 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0114 12:04:45.752934    5748 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0114 12:04:45.810437    5748 provision.go:86] duration metric: configureAuth took 951.1288ms
	I0114 12:04:45.810437    5748 ubuntu.go:193] setting minikube options for container-runtime
	I0114 12:04:45.811220    5748 config.go:180] Loaded profile config "calico-114511": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 12:04:45.820819    5748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-114511
	I0114 12:04:46.026863    5748 main.go:134] libmachine: Using SSH client type: native
	I0114 12:04:46.027560    5748 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x68fea0] 0x692e20 <nil>  [] 0s} 127.0.0.1 50373 <nil> <nil>}
	I0114 12:04:46.027560    5748 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 12:04:46.171931    5748 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0114 12:04:46.171931    5748 ubuntu.go:71] root file system type: overlay
	I0114 12:04:46.171931    5748 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 12:04:46.184913    5748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-114511
	I0114 12:04:46.388148    5748 main.go:134] libmachine: Using SSH client type: native
	I0114 12:04:46.389081    5748 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x68fea0] 0x692e20 <nil>  [] 0s} 127.0.0.1 50373 <nil> <nil>}
	I0114 12:04:46.389081    5748 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 12:04:46.636496    5748 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 12:04:46.643546    5748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-114511
	I0114 12:04:46.862529    5748 main.go:134] libmachine: Using SSH client type: native
	I0114 12:04:46.863516    5748 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x68fea0] 0x692e20 <nil>  [] 0s} 127.0.0.1 50373 <nil> <nil>}
	I0114 12:04:46.863516    5748 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 12:04:51.078506    5748 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-25 18:00:04.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-14 12:04:46.622533000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0114 12:04:51.078506    5748 machine.go:91] provisioned docker machine in 7.1707501s
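The restart sequence logged above is deliberately conditional: the freshly rendered docker.service.new only replaces the installed unit, and only triggers daemon-reload, enable and restart, when "diff -u" reports a difference between the two files. A minimal Go sketch of that pattern is shown below; the runCmd type stands in for the SSH runner seen in this log and is a hypothetical helper, not minikube's actual API.

    package main

    import "fmt"

    // runCmd stands in for the SSH command runner used throughout this log;
    // it is a hypothetical type, not minikube's real interface.
    type runCmd func(cmd string) (string, error)

    // swapUnitIfChanged mirrors the step above: the rendered
    // docker.service.new only replaces the installed unit (and only triggers
    // daemon-reload, enable and restart) when diff -u reports a difference.
    func swapUnitIfChanged(run runCmd, unit string) error {
        cmd := fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
            unit)
        _, err := run(cmd)
        return err
    }

    func main() {
        echo := func(cmd string) (string, error) { fmt.Println(cmd); return "", nil }
        if err := swapUnitIfChanged(echo, "/lib/systemd/system/docker.service"); err != nil {
            fmt.Println("update failed:", err)
        }
    }

Because an unchanged unit short-circuits at the diff, repeated provisioning runs do not restart a healthy Docker daemon.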
	I0114 12:04:51.078506    5748 client.go:171] LocalClient.Create took 44.8542697s
	I0114 12:04:51.078506    5748 start.go:167] duration metric: libmachine.API.Create for "calico-114511" took 44.8542697s
	I0114 12:04:51.078506    5748 start.go:300] post-start starting for "calico-114511" (driver="docker")
	I0114 12:04:51.078506    5748 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 12:04:51.094501    5748 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 12:04:51.102516    5748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-114511
	I0114 12:04:51.355882    5748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50373 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-114511\id_rsa Username:docker}
	I0114 12:04:51.529401    5748 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 12:04:51.541419    5748 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 12:04:51.541419    5748 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 12:04:51.541419    5748 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 12:04:51.541419    5748 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 12:04:51.541419    5748 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I0114 12:04:51.541419    5748 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I0114 12:04:51.542396    5748 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99682.pem -> 99682.pem in /etc/ssl/certs
	I0114 12:04:51.556403    5748 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 12:04:51.581412    5748 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99682.pem --> /etc/ssl/certs/99682.pem (1708 bytes)
	I0114 12:04:51.630403    5748 start.go:303] post-start completed in 551.8915ms
	I0114 12:04:51.644401    5748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-114511
	I0114 12:04:51.915319    5748 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\config.json ...
	I0114 12:04:51.938296    5748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 12:04:51.946290    5748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-114511
	I0114 12:04:52.259450    5748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50373 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-114511\id_rsa Username:docker}
	I0114 12:04:52.410450    5748 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 12:04:52.424463    5748 start.go:128] duration metric: createHost completed in 46.2052439s
	I0114 12:04:52.424463    5748 start.go:83] releasing machines lock for "calico-114511", held for 46.2062323s
	I0114 12:04:52.433449    5748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-114511
	I0114 12:04:52.716091    5748 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 12:04:52.726129    5748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-114511
	I0114 12:04:52.729079    5748 ssh_runner.go:195] Run: cat /version.json
	I0114 12:04:52.743092    5748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-114511
	I0114 12:04:53.020101    5748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50373 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-114511\id_rsa Username:docker}
	I0114 12:04:53.031077    5748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50373 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-114511\id_rsa Username:docker}
	I0114 12:04:53.298089    5748 ssh_runner.go:195] Run: systemctl --version
	I0114 12:04:53.329089    5748 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0114 12:04:53.376092    5748 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0114 12:04:53.439098    5748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 12:04:53.648095    5748 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0114 12:04:53.935111    5748 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 12:04:53.996431    5748 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0114 12:04:54.013479    5748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 12:04:54.045439    5748 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 12:04:54.118439    5748 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 12:04:54.364467    5748 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 12:04:54.718832    5748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 12:04:55.086837    5748 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 12:04:55.811524    5748 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0114 12:04:56.061550    5748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 12:04:56.306851    5748 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0114 12:04:56.338838    5748 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0114 12:04:56.352852    5748 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0114 12:04:56.366884    5748 start.go:472] Will wait 60s for crictl version
	I0114 12:04:56.388842    5748 ssh_runner.go:195] Run: which crictl
	I0114 12:04:56.414838    5748 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 12:04:56.510853    5748 start.go:488] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
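Before asking crictl for a version, start.go:451 above waits up to 60s for /var/run/cri-dockerd.sock to appear. A standalone sketch of that socket wait, using only the standard library, is given below; the poll interval is illustrative and the path is taken from the log.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the given path exists and is a unix socket,
    // or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }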
	I0114 12:04:56.520844    5748 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 12:04:56.632842    5748 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 12:04:56.722857    5748 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0114 12:04:56.731856    5748 cli_runner.go:164] Run: docker exec -t calico-114511 dig +short host.docker.internal
	I0114 12:04:57.246464    5748 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0114 12:04:57.264454    5748 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0114 12:04:57.279475    5748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
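The /etc/hosts update above is an upsert: strip any existing line ending in a tab plus host.minikube.internal, append the new record, and copy the result back over /etc/hosts. The same transform applied to the file contents in plain Go looks like this (the helper name is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHostRecord drops any existing line ending in "\t<name>" and
    // appends a fresh "<ip>\t<name>" record, matching the
    // grep -v / echo / cp pipeline in the log above.
    func upsertHostRecord(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale record for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.65.9\thost.minikube.internal\n"
        fmt.Print(upsertHostRecord(hosts, "192.168.65.2", "host.minikube.internal"))
    }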
	I0114 12:04:57.324453    5748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-114511
	I0114 12:04:57.594461    5748 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 12:04:57.609454    5748 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 12:04:57.720791    5748 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0114 12:04:57.720902    5748 docker.go:543] Images already preloaded, skipping extraction
	I0114 12:04:57.735460    5748 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 12:04:57.823448    5748 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0114 12:04:57.823448    5748 cache_images.go:84] Images are preloaded, skipping loading
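The "Images are preloaded" decision comes from comparing the repo:tag list reported by docker images --format {{.Repository}}:{{.Tag}} against the images the requested Kubernetes version needs. A rough sketch of that comparison follows; the function and parameter names are illustrative, not minikube's own.

    package main

    import (
        "fmt"
        "strings"
    )

    // allPreloaded reports whether every required repo:tag appears in the
    // output of docker images --format {{.Repository}}:{{.Tag}}.
    func allPreloaded(dockerOutput string, required []string) bool {
        have := make(map[string]bool)
        for _, line := range strings.Split(strings.TrimSpace(dockerOutput), "\n") {
            have[strings.TrimSpace(line)] = true
        }
        for _, img := range required {
            if !have[img] {
                return false
            }
        }
        return true
    }

    func main() {
        out := "registry.k8s.io/kube-apiserver:v1.25.3\nregistry.k8s.io/pause:3.8\n"
        fmt.Println(allPreloaded(out, []string{"registry.k8s.io/pause:3.8"}))
    }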
	I0114 12:04:57.839458    5748 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 12:04:58.065087    5748 cni.go:95] Creating CNI manager for "calico"
	I0114 12:04:58.065087    5748 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 12:04:58.065087    5748 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-114511 NodeName:calico-114511 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 12:04:58.066088    5748 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "calico-114511"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 12:04:58.066088    5748 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=calico-114511 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:calico-114511 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
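Both the kubeadm.yaml above and the kubelet drop-in are rendered from the option sets logged at kubeadm.go:158 and kubeadm.go:962. A stripped-down text/template rendering of just the InitConfiguration stanza gives the flavor of that step; the struct and field names here are illustrative rather than minikube's own.

    package main

    import (
        "os"
        "text/template"
    )

    // initCfg and the template below cover only the InitConfiguration
    // stanza; the real generator also emits ClusterConfiguration,
    // KubeletConfiguration and KubeProxyConfiguration as shown in the log.
    type initCfg struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        CRISocket        string
    }

    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        cfg := initCfg{
            AdvertiseAddress: "192.168.67.2",
            BindPort:         8443,
            NodeName:         "calico-114511",
            CRISocket:        "/var/run/cri-dockerd.sock",
        }
        if err := template.Must(template.New("kubeadm").Parse(initTmpl)).Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }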
	I0114 12:04:58.085089    5748 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 12:04:58.116121    5748 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 12:04:58.139087    5748 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 12:04:58.170099    5748 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
	I0114 12:04:58.231099    5748 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 12:04:58.284336    5748 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2036 bytes)
	I0114 12:04:58.344352    5748 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0114 12:04:58.358379    5748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 12:04:58.411343    5748 certs.go:54] Setting up C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511 for IP: 192.168.67.2
	I0114 12:04:58.411343    5748 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key
	I0114 12:04:58.412364    5748 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key
	I0114 12:04:58.412364    5748 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\client.key
	I0114 12:04:58.413358    5748 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\client.crt with IP's: []
	I0114 12:04:58.899213    5748 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\client.crt ...
	I0114 12:04:58.899213    5748 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\client.crt: {Name:mk31478c4aa6db89ad597b2c1db39ae0e89e13c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:04:58.901209    5748 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\client.key ...
	I0114 12:04:58.901209    5748 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\client.key: {Name:mkd6332de05aec13c3742a23e7f6c140257dc352 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:04:58.902206    5748 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\apiserver.key.c7fa3a9e
	I0114 12:04:58.902206    5748 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0114 12:04:59.142463    5748 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\apiserver.crt.c7fa3a9e ...
	I0114 12:04:59.142463    5748 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\apiserver.crt.c7fa3a9e: {Name:mk64986e06c909632e05514d8cc4cc6957ee2b5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:04:59.143228    5748 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\apiserver.key.c7fa3a9e ...
	I0114 12:04:59.143228    5748 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\apiserver.key.c7fa3a9e: {Name:mk0bbd4f71fece17a2547cf7fc310cc39ab85a49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:04:59.145218    5748 certs.go:320] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\apiserver.crt
	I0114 12:04:59.155233    5748 certs.go:324] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\apiserver.key
	I0114 12:04:59.158234    5748 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\proxy-client.key
	I0114 12:04:59.158234    5748 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\proxy-client.crt with IP's: []
	I0114 12:04:59.744413    5748 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\proxy-client.crt ...
	I0114 12:04:59.744413    5748 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\proxy-client.crt: {Name:mkf21e3b4ebac7c95f3885ebc755268a5ae7ec88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:04:59.745419    5748 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\proxy-client.key ...
	I0114 12:04:59.745419    5748 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\proxy-client.key: {Name:mkdd6c69ee0c47cc85de26314cf17b2acd443cb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
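The crypto.go lines above cover the three signing steps performed for a new profile: a client certificate, an apiserver serving certificate, and an aggregator (proxy-client) certificate, each issued by a locally held CA. The sketch below reproduces the general shape of one such issuance with crypto/x509; the CA is self-signed in place of the profile's stored ca.crt/ca.key, and the subjects and lifetimes are illustrative.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA, standing in for the profile's ca.crt / ca.key.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // CA-signed client certificate, analogous to the profile's client.crt.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }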
	I0114 12:04:59.757433    5748 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\9968.pem (1338 bytes)
	W0114 12:04:59.758407    5748 certs.go:384] ignoring C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\9968_empty.pem, impossibly tiny 0 bytes
	I0114 12:04:59.758407    5748 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0114 12:04:59.758407    5748 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0114 12:04:59.759457    5748 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0114 12:04:59.759457    5748 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0114 12:04:59.759457    5748 certs.go:388] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99682.pem (1708 bytes)
	I0114 12:04:59.761413    5748 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 12:04:59.846685    5748 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 12:04:59.929663    5748 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 12:04:59.994664    5748 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\calico-114511\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0114 12:05:00.068686    5748 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 12:05:00.134799    5748 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0114 12:05:00.218267    5748 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 12:05:00.278248    5748 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0114 12:05:00.469479    5748 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 12:05:00.529474    5748 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\9968.pem --> /usr/share/ca-certificates/9968.pem (1338 bytes)
	I0114 12:05:00.590469    5748 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\99682.pem --> /usr/share/ca-certificates/99682.pem (1708 bytes)
	I0114 12:05:00.668458    5748 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 12:05:00.730463    5748 ssh_runner.go:195] Run: openssl version
	I0114 12:05:00.759461    5748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 12:05:00.811153    5748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 12:05:00.823165    5748 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I0114 12:05:00.842161    5748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 12:05:00.877166    5748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 12:05:00.921152    5748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9968.pem && ln -fs /usr/share/ca-certificates/9968.pem /etc/ssl/certs/9968.pem"
	I0114 12:05:00.965145    5748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9968.pem
	I0114 12:05:00.979162    5748 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:21 /usr/share/ca-certificates/9968.pem
	I0114 12:05:01.005884    5748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9968.pem
	I0114 12:05:01.032882    5748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9968.pem /etc/ssl/certs/51391683.0"
	I0114 12:05:01.082889    5748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99682.pem && ln -fs /usr/share/ca-certificates/99682.pem /etc/ssl/certs/99682.pem"
	I0114 12:05:01.124875    5748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99682.pem
	I0114 12:05:01.136876    5748 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:21 /usr/share/ca-certificates/99682.pem
	I0114 12:05:01.150880    5748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99682.pem
	I0114 12:05:01.186877    5748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99682.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 12:05:01.212897    5748 kubeadm.go:396] StartCluster: {Name:calico-114511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-114511 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 12:05:01.226876    5748 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 12:05:01.325893    5748 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 12:05:01.365888    5748 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 12:05:01.393897    5748 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 12:05:01.410898    5748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 12:05:01.434928    5748 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 12:05:01.434928    5748 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 12:05:01.573199    5748 kubeadm.go:317] W0114 12:05:01.569119    1232 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 12:05:01.688497    5748 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0114 12:05:01.960821    5748 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 12:05:52.975224    5748 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0114 12:05:52.975224    5748 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 12:05:52.976230    5748 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 12:05:52.976230    5748 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 12:05:52.976230    5748 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 12:05:52.977227    5748 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 12:05:52.981210    5748 out.go:204]   - Generating certificates and keys ...
	I0114 12:05:52.981210    5748 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 12:05:52.981210    5748 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 12:05:52.982209    5748 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0114 12:05:52.982209    5748 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0114 12:05:52.982209    5748 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0114 12:05:52.982209    5748 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0114 12:05:52.982209    5748 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0114 12:05:52.982209    5748 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [calico-114511 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0114 12:05:52.983206    5748 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0114 12:05:52.983206    5748 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [calico-114511 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0114 12:05:52.983206    5748 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0114 12:05:52.983206    5748 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0114 12:05:52.983206    5748 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0114 12:05:52.983206    5748 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 12:05:52.983206    5748 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 12:05:52.984198    5748 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 12:05:52.984198    5748 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 12:05:52.984198    5748 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 12:05:52.984198    5748 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 12:05:52.985190    5748 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 12:05:52.985190    5748 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0114 12:05:52.985190    5748 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 12:05:52.988200    5748 out.go:204]   - Booting up control plane ...
	I0114 12:05:52.988200    5748 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 12:05:52.988200    5748 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 12:05:52.989211    5748 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 12:05:52.989211    5748 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 12:05:52.989211    5748 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 12:05:52.989211    5748 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0114 12:05:52.990201    5748 kubeadm.go:317] [apiclient] All control plane components are healthy after 43.008724 seconds
	I0114 12:05:52.990201    5748 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0114 12:05:52.990201    5748 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0114 12:05:52.991214    5748 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0114 12:05:52.991214    5748 kubeadm.go:317] [mark-control-plane] Marking the node calico-114511 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0114 12:05:52.991214    5748 kubeadm.go:317] [bootstrap-token] Using token: uiah1z.gqivhjrb2szcg5ez
	I0114 12:05:52.994208    5748 out.go:204]   - Configuring RBAC rules ...
	I0114 12:05:52.995218    5748 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0114 12:05:52.995218    5748 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0114 12:05:52.995218    5748 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0114 12:05:52.996219    5748 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0114 12:05:52.996219    5748 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0114 12:05:52.996219    5748 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0114 12:05:52.997258    5748 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0114 12:05:52.997258    5748 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0114 12:05:52.997258    5748 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0114 12:05:52.997258    5748 kubeadm.go:317] 
	I0114 12:05:52.998213    5748 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0114 12:05:52.998213    5748 kubeadm.go:317] 
	I0114 12:05:52.998213    5748 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0114 12:05:52.998213    5748 kubeadm.go:317] 
	I0114 12:05:52.998213    5748 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0114 12:05:52.998213    5748 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0114 12:05:52.998213    5748 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0114 12:05:52.998213    5748 kubeadm.go:317] 
	I0114 12:05:52.999205    5748 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0114 12:05:52.999205    5748 kubeadm.go:317] 
	I0114 12:05:52.999205    5748 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0114 12:05:52.999205    5748 kubeadm.go:317] 
	I0114 12:05:52.999205    5748 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0114 12:05:52.999205    5748 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0114 12:05:52.999205    5748 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0114 12:05:52.999205    5748 kubeadm.go:317] 
	I0114 12:05:53.000208    5748 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0114 12:05:53.000208    5748 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0114 12:05:53.000208    5748 kubeadm.go:317] 
	I0114 12:05:53.000208    5748 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token uiah1z.gqivhjrb2szcg5ez \
	I0114 12:05:53.000208    5748 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:d4abf46d00e7a9b2779f6d5264f268d71e7682a3ed209a13fd506918ad0491d1 \
	I0114 12:05:53.000208    5748 kubeadm.go:317] 	--control-plane 
	I0114 12:05:53.001207    5748 kubeadm.go:317] 
	I0114 12:05:53.001207    5748 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0114 12:05:53.001207    5748 kubeadm.go:317] 
	I0114 12:05:53.001207    5748 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token uiah1z.gqivhjrb2szcg5ez \
	I0114 12:05:53.001207    5748 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:d4abf46d00e7a9b2779f6d5264f268d71e7682a3ed209a13fd506918ad0491d1 
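The --discovery-token-ca-cert-hash value printed by kubeadm above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, so it can be recomputed from the CA certificate alone. A small sketch (the input path is illustrative; on the node the CA lives under /var/lib/minikube/certs/ca.crt):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        raw, err := os.ReadFile("ca.crt") // path is illustrative
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }

Workers joining with the printed token verify the control plane against this pin, which is why the same hash appears in both join commands above.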
	I0114 12:05:53.001207    5748 cni.go:95] Creating CNI manager for "calico"
	I0114 12:05:53.004217    5748 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0114 12:05:53.008218    5748 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0114 12:05:53.008218    5748 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202045 bytes)
	I0114 12:05:53.103392    5748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0114 12:05:57.595704    5748 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (4.4922102s)
	I0114 12:05:57.595872    5748 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 12:05:57.610094    5748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81 minikube.k8s.io/name=calico-114511 minikube.k8s.io/updated_at=2023_01_14T12_05_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:05:57.612103    5748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:05:57.675777    5748 ops.go:34] apiserver oom_adj: -16
	I0114 12:05:57.980801    5748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:05:59.094336    5748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:05:59.578645    5748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:06:00.078734    5748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:06:00.583651    5748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:06:01.091023    5748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:06:01.591724    5748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:06:02.077568    5748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:06:02.580558    5748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:06:03.085606    5748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:06:04.080479    5748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:06:07.951931    5748 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.8714119s)
	I0114 12:06:08.089988    5748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 12:06:10.097745    5748 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.0077356s)
	I0114 12:06:10.097745    5748 kubeadm.go:1067] duration metric: took 12.5016222s to wait for elevateKubeSystemPrivileges.
	I0114 12:06:10.097745    5748 kubeadm.go:398] StartCluster complete in 1m8.8841572s
	I0114 12:06:10.097745    5748 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:06:10.097745    5748 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0114 12:06:10.100600    5748 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 12:06:14.558164    5748 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-114511" rescaled to 1
	I0114 12:06:14.558951    5748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0114 12:06:14.558951    5748 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0114 12:06:14.558951    5748 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 12:06:14.559963    5748 out.go:177] * Verifying Kubernetes components...
	I0114 12:06:14.558951    5748 addons.go:65] Setting storage-provisioner=true in profile "calico-114511"
	I0114 12:06:14.558951    5748 addons.go:65] Setting default-storageclass=true in profile "calico-114511"
	I0114 12:06:14.559963    5748 config.go:180] Loaded profile config "calico-114511": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 12:06:14.559963    5748 addons.go:227] Setting addon storage-provisioner=true in "calico-114511"
	W0114 12:06:14.559963    5748 addons.go:236] addon storage-provisioner should already be in state true
	I0114 12:06:14.559963    5748 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-114511"
	I0114 12:06:14.559963    5748 host.go:66] Checking if "calico-114511" exists ...
	I0114 12:06:14.584056    5748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 12:06:14.591072    5748 cli_runner.go:164] Run: docker container inspect calico-114511 --format={{.State.Status}}
	I0114 12:06:14.592050    5748 cli_runner.go:164] Run: docker container inspect calico-114511 --format={{.State.Status}}
	I0114 12:06:14.858207    5748 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 12:06:14.861128    5748 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 12:06:14.861128    5748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0114 12:06:14.875132    5748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-114511
	I0114 12:06:14.958646    5748 addons.go:227] Setting addon default-storageclass=true in "calico-114511"
	W0114 12:06:14.958646    5748 addons.go:236] addon default-storageclass should already be in state true
	I0114 12:06:14.958646    5748 host.go:66] Checking if "calico-114511" exists ...
	I0114 12:06:14.992996    5748 cli_runner.go:164] Run: docker container inspect calico-114511 --format={{.State.Status}}
	I0114 12:06:15.105987    5748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50373 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-114511\id_rsa Username:docker}
	I0114 12:06:15.215034    5748 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0114 12:06:15.215034    5748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0114 12:06:15.222989    5748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-114511
	I0114 12:06:15.269627    5748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0114 12:06:15.288869    5748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-114511
	I0114 12:06:15.482235    5748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50373 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\calico-114511\id_rsa Username:docker}
	I0114 12:06:15.500240    5748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 12:06:15.547516    5748 node_ready.go:35] waiting up to 5m0s for node "calico-114511" to be "Ready" ...
	I0114 12:06:15.559536    5748 node_ready.go:49] node "calico-114511" has status "Ready":"True"
	I0114 12:06:15.559536    5748 node_ready.go:38] duration metric: took 12.0206ms waiting for node "calico-114511" to be "Ready" ...
	I0114 12:06:15.559536    5748 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 12:06:15.670635    5748 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace to be "Ready" ...
	I0114 12:06:16.191689    5748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0114 12:06:17.869531    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:19.880445    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:21.956803    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:22.359421    5748 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.0897189s)
	I0114 12:06:22.359421    5748 start.go:833] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
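The seven-second command that just completed rewrites the CoreDNS ConfigMap so that a hosts{} stanza resolving host.minikube.internal to 192.168.65.2 sits directly above the forward-to-/etc/resolv.conf plugin. The string transform performed by that sed expression looks roughly like this in Go (the sample Corefile in main is abbreviated):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts{} stanza immediately before the
    // "forward . /etc/resolv.conf" line of a Corefile, mirroring the sed
    // pipeline in the log above.
    func injectHostRecord(corefile, hostIP string) string {
        stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out.WriteString(stanza)
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
        fmt.Print(injectHostRecord(corefile, "192.168.65.2"))
    }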
	I0114 12:06:23.001425    5748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.5001148s)
	I0114 12:06:23.001425    5748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.8096643s)
	I0114 12:06:23.004429    5748 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0114 12:06:23.007415    5748 addons.go:488] enableAddons completed in 8.448376s
	I0114 12:06:24.365520    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:26.804296    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:29.300724    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:31.456886    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:33.464291    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:35.959830    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:38.368941    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:40.962339    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:43.386086    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:45.885038    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:48.354711    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:50.814270    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:53.373898    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:55.854525    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:06:57.871603    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:00.369338    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:02.854790    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:05.363895    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:07.800635    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:09.869694    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:12.289864    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:14.304331    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:16.379254    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:18.874282    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:20.958663    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:23.357860    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:25.373132    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:27.458541    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:29.869386    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:31.871858    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:33.875197    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:35.962227    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:38.368769    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:40.865335    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:42.870094    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:45.315613    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:47.360953    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:49.961244    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:52.296982    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:54.357487    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:56.798945    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:07:58.854888    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:00.864901    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:03.375546    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:05.873907    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:08.309372    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:10.858393    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:13.305045    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:15.957246    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:18.306429    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:20.368744    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:22.870148    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:25.456682    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:27.870003    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:30.355141    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:32.368870    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:34.874865    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:36.878088    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:39.372241    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:41.955447    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:44.304979    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:46.380592    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:48.871533    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:51.305876    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:53.867301    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:55.875217    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:08:58.301879    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:00.303385    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:02.856154    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:04.870400    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:06.871439    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:09.303094    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:11.808787    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:14.360840    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:16.869514    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:19.358591    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:21.874386    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:23.882377    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:26.376521    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:28.858827    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:31.296857    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:33.358751    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:35.373125    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:37.802538    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:39.872364    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:41.872933    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:44.356989    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:46.371448    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:48.377878    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:50.957566    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:53.372795    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:55.869781    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:09:58.310247    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:00.372902    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:02.795495    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:04.806517    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:07.303685    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:09.308974    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:11.868263    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:14.292424    5748 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:15.836943    5748 pod_ready.go:81] duration metric: took 4m0.1636196s waiting for pod "calico-kube-controllers-7df895d496-lrr86" in "kube-system" namespace to be "Ready" ...
	E0114 12:10:15.836943    5748 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0114 12:10:15.836943    5748 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-x2r5v" in "kube-system" namespace to be "Ready" ...
	I0114 12:10:17.899048    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:20.464986    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:22.958286    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:25.461768    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:27.900297    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:30.397686    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:32.892432    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:34.956501    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:36.957002    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:39.389513    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:41.397492    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:43.458298    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:45.469900    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:47.895394    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:49.899359    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:52.393730    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:54.401723    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:56.958211    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:10:59.464789    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:01.893861    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:03.894219    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:05.965486    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:08.458656    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:10.891559    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:12.895501    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:15.482533    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:17.886096    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:19.961602    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:26.299832    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:28.405190    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:30.888650    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:32.893910    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:35.478722    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:37.892021    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:39.896935    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:42.392137    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:44.907600    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:47.767415    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:49.890652    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:51.895567    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:53.899615    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:56.386135    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:11:58.390371    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:00.393881    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:02.890754    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:05.411345    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:07.971263    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:10.461473    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:12.883863    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:14.895004    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:16.895980    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:18.959120    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:20.974227    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:23.398435    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:25.959807    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:28.462703    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:30.963938    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:33.461106    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:35.979771    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:38.401300    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:40.968719    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:42.974052    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:45.472529    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:47.963803    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:50.392493    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:52.394198    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:54.884557    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:56.901350    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:12:59.463560    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:01.896792    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:04.462088    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:06.890962    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:08.906744    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:10.962155    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:12.962933    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:15.385857    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:17.390690    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:19.900871    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:22.388179    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:24.395952    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:26.398756    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:28.403678    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:30.894188    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:33.400171    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:35.462059    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:38.065281    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:40.405475    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:42.959280    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:45.463377    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:47.901649    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:51.091835    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:53.463776    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:55.886884    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:13:57.905298    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:14:00.402267    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:14:02.894838    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:14:05.459519    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:14:07.885431    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:14:09.897360    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:14:11.898770    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:14:14.388064    5748 pod_ready.go:102] pod "calico-node-x2r5v" in "kube-system" namespace has status "Ready":"False"
	I0114 12:14:15.970629    5748 pod_ready.go:81] duration metric: took 4m0.1311166s waiting for pod "calico-node-x2r5v" in "kube-system" namespace to be "Ready" ...
	E0114 12:14:15.970629    5748 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0114 12:14:15.970629    5748 pod_ready.go:38] duration metric: took 8m0.4059779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 12:14:15.973636    5748 out.go:177] 
	W0114 12:14:15.976927    5748 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0114 12:14:15.977002    5748 out.go:239] * 
	W0114 12:14:15.979249    5748 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 12:14:15.989474    5748 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (613.05s)
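
The calico failure above is a readiness timeout rather than a crash: pod_ready.go polls the calico-kube-controllers and calico-node pods until its budget runs out, then the run exits with GUEST_START (exit status 80). Below is a minimal sketch of that kind of wait loop, written with client-go rather than minikube's own pod_ready.go helpers; the default kubeconfig location and the namespace/pod name from this run are assumptions made purely for illustration.

	// A minimal sketch, not minikube's pod_ready.go: wait for a pod's Ready
	// condition with client-go. Assumes the default kubeconfig location and
	// reuses the kube-system pod name from this run for illustration.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 2s, mirroring the "waiting up to 5m0s" budget in the log.
		err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "calico-node-x2r5v", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		if err != nil {
			fmt.Println("timed out waiting for the condition:", err)
		}
	}

In the failing run the Ready condition never flipped to True within the budget, which is exactly what each repeated pod_ready.go:102 line above records.
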

                                                
                                    
TestNetworkPlugins/group/false/DNS (341.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5867041s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default
E0114 12:11:56.960028    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:12:05.244392    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.7540429s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5415292s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5538019s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6447403s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5642481s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0114 12:13:27.378330    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:13:31.432145    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5256119s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default
E0114 12:14:09.398029    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4526164s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5553186s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0114 12:15:00.729597    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5031159s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5726675s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-114509 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5412452s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/false/DNS (341.51s)
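
Every retry above is the same probe: exec nslookup kubernetes.default inside the netcat deployment and expect the service IP 10.96.0.1 in the answer (net_test.go:180). A standalone sketch of that probe follows; it assumes kubectl is on PATH and reuses the false-114509 context name from this run, and it is only an illustration of the check, not the test code itself.

	// A minimal sketch of the DNS probe the test repeats: exec nslookup in the
	// netcat deployment and expect the kubernetes.default service IP,
	// 10.96.0.1, in the answer. Assumes kubectl on PATH and the
	// "false-114509" context from this run.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "false-114509",
			"exec", "deployment/netcat", "--",
			"nslookup", "kubernetes.default").CombinedOutput()
		if err != nil {
			fmt.Printf("nslookup failed: %v\n%s", err, out)
			return
		}
		if strings.Contains(string(out), "10.96.0.1") {
			fmt.Println("DNS OK: kubernetes.default resolved to the expected service IP")
		} else {
			fmt.Printf("unexpected nslookup output:\n%s", out)
		}
	}

Here the probe never got that far: ";; connection timed out; no servers could be reached" means the pod could not reach any resolver at all.
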

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (330.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default
E0114 12:12:26.098291    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5903987s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5737274s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default
E0114 12:12:57.696534    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
E0114 12:13:03.616916    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5492235s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5562416s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5181233s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0114 12:13:41.576601    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6593369s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4907521s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (19.0987362s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5378229s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5567899s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-114507 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.541271s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (330.54s)
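
The bridge profile fails the same probe in the same way: every nslookup times out before any server answers, which points at cluster DNS reachability rather than the record itself. The triage sketch below is not part of the test suite; it assumes kubectl on PATH, the bridge-114507 context from this run, and that 10.96.0.10 is only the usual default kube-dns ClusterIP on kubeadm-style clusters.

	// A hedged triage sketch for the symptom above: in-pod lookups time out
	// before any server answers. It prints the kube-dns Service, the CoreDNS
	// pods, and the resolver the netcat pod actually sees.
	// Assumes kubectl on PATH and the "bridge-114507" context from this run.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) {
		cmd := append([]string{"--context", "bridge-114507"}, args...)
		out, err := exec.Command("kubectl", cmd...).CombinedOutput()
		fmt.Printf("$ kubectl %v\n%s(err: %v)\n\n", cmd, out, err)
	}

	func main() {
		// The kube-dns ClusterIP is typically 10.96.0.10 on kubeadm-based clusters.
		run("-n", "kube-system", "get", "svc", "kube-dns", "-o", "wide")
		run("-n", "kube-system", "get", "pods", "-l", "k8s-app=kube-dns", "-o", "wide")
		run("exec", "deployment/netcat", "--", "cat", "/etc/resolv.conf")
	}
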

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (62.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5986082s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0114 12:16:02.897180    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
net_test.go:238: (dbg) Run:  kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5006584s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5208895s)

** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.4889114s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
E0114 12:16:25.371764    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
net_test.go:238: (dbg) Run:  kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.4739087s)

** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5542612s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0114 12:16:56.955666    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.4814042s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (62.07s)
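In this run the kubenet DNS and Localhost checks passed (see the passed-test table below), but the pod cannot reach its own Service, which is the classic hairpin-NAT symptom. A short manual sketch for narrowing it down, assuming the kubenet-114507 context is still available; the Service name netcat and port 8080 come from the nc command above, while <pod-ip> is a placeholder to fill in from the get pods output:

	# Confirm the Service exists and has the pod as a ready endpoint
	kubectl --context kubenet-114507 get svc netcat
	kubectl --context kubenet-114507 get endpoints netcat
	kubectl --context kubenet-114507 get pods -o wide
	# Connect to the pod IP directly, then through the Service name, from inside the same pod;
	# a hairpin problem typically passes the first check and fails the second
	kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z <pod-ip> 8080"
	kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"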


Test pass (249/280)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.66
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.58
10 TestDownloadOnly/v1.25.3/json-events 8.01
11 TestDownloadOnly/v1.25.3/preload-exists 0
14 TestDownloadOnly/v1.25.3/kubectl 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.46
16 TestDownloadOnly/DeleteAll 2.42
17 TestDownloadOnly/DeleteAlwaysSucceeds 1.61
18 TestDownloadOnlyKic 35.66
19 TestBinaryMirror 4.27
20 TestOffline 176.7
22 TestAddons/Setup 453.82
26 TestAddons/parallel/MetricsServer 9.18
27 TestAddons/parallel/HelmTiller 32.98
29 TestAddons/parallel/CSI 86.89
30 TestAddons/parallel/Headlamp 35.58
31 TestAddons/parallel/CloudSpanner 9.11
34 TestAddons/serial/GCPAuth/Namespaces 0.5
35 TestAddons/StoppedEnableDisable 14.82
36 TestCertOptions 102.33
37 TestCertExpiration 335.32
38 TestDockerFlags 114.24
39 TestForceSystemdFlag 114.56
40 TestForceSystemdEnv 128.63
45 TestErrorSpam/setup 83.91
46 TestErrorSpam/start 6.06
47 TestErrorSpam/status 6.1
48 TestErrorSpam/pause 5.04
49 TestErrorSpam/unpause 5.66
50 TestErrorSpam/stop 16.7
53 TestFunctional/serial/CopySyncFile 0.03
54 TestFunctional/serial/StartWithProxy 98.4
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 53.62
57 TestFunctional/serial/KubeContext 0.17
58 TestFunctional/serial/KubectlGetPods 0.35
61 TestFunctional/serial/CacheCmd/cache/add_remote 8.58
62 TestFunctional/serial/CacheCmd/cache/add_local 4.6
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.38
64 TestFunctional/serial/CacheCmd/cache/list 0.38
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 1.49
66 TestFunctional/serial/CacheCmd/cache/cache_reload 6.59
67 TestFunctional/serial/CacheCmd/cache/delete 0.74
68 TestFunctional/serial/MinikubeKubectlCmd 0.71
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.46
70 TestFunctional/serial/ExtraConfig 58.28
71 TestFunctional/serial/ComponentHealth 0.27
72 TestFunctional/serial/LogsCmd 3.46
73 TestFunctional/serial/LogsFileCmd 3.67
75 TestFunctional/parallel/ConfigCmd 2.53
77 TestFunctional/parallel/DryRun 4.14
78 TestFunctional/parallel/InternationalLanguage 1.95
79 TestFunctional/parallel/StatusCmd 7.49
84 TestFunctional/parallel/AddonsCmd 1.16
85 TestFunctional/parallel/PersistentVolumeClaim 65.48
87 TestFunctional/parallel/SSHCmd 3.66
88 TestFunctional/parallel/CpCmd 6.63
89 TestFunctional/parallel/MySQL 102.26
90 TestFunctional/parallel/FileSync 1.55
91 TestFunctional/parallel/CertSync 9.83
95 TestFunctional/parallel/NodeLabels 0.27
97 TestFunctional/parallel/NonActiveRuntimeDisabled 1.59
99 TestFunctional/parallel/License 2.4
100 TestFunctional/parallel/ProfileCmd/profile_not_create 2.23
101 TestFunctional/parallel/ProfileCmd/profile_list 2.39
102 TestFunctional/parallel/ProfileCmd/profile_json_output 2.46
104 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
106 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 29.06
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.27
112 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
113 TestFunctional/parallel/Version/short 0.5
114 TestFunctional/parallel/Version/components 3.16
115 TestFunctional/parallel/ImageCommands/ImageListShort 1.21
116 TestFunctional/parallel/ImageCommands/ImageListTable 1.3
117 TestFunctional/parallel/ImageCommands/ImageListJson 1.59
118 TestFunctional/parallel/ImageCommands/ImageListYaml 1.57
119 TestFunctional/parallel/ImageCommands/ImageBuild 15.28
120 TestFunctional/parallel/ImageCommands/Setup 4.18
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 10.16
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.41
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 15.31
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 5.41
125 TestFunctional/parallel/DockerEnv/powershell 7.49
126 TestFunctional/parallel/ImageCommands/ImageRemove 2.25
127 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 7.73
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.97
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 1
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.97
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 9.63
132 TestFunctional/delete_addon-resizer_images 0.02
133 TestFunctional/delete_my-image_image 0.01
134 TestFunctional/delete_minikube_cached_images 0.01
137 TestIngressAddonLegacy/StartLegacyK8sCluster 102.09
139 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 61.31
140 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 1.87
144 TestJSONOutput/start/Command 104.99
145 TestJSONOutput/start/Audit 0
147 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/pause/Command 2.24
151 TestJSONOutput/pause/Audit 0
153 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/unpause/Command 1.88
157 TestJSONOutput/unpause/Audit 0
159 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
162 TestJSONOutput/stop/Command 8.42
163 TestJSONOutput/stop/Audit 0
165 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
167 TestErrorJSONOutput 1.8
169 TestKicCustomNetwork/create_custom_network 88.27
170 TestKicCustomNetwork/use_default_bridge_network 85.68
171 TestKicExistingNetwork 86.48
172 TestKicCustomSubnet 89.57
173 TestKicStaticIP 91.08
174 TestMainNoArgs 0.37
175 TestMinikubeProfile 184.08
178 TestMountStart/serial/StartWithMountFirst 21.69
179 TestMountStart/serial/VerifyMountFirst 1.37
180 TestMountStart/serial/StartWithMountSecond 19.04
181 TestMountStart/serial/VerifyMountSecond 1.35
182 TestMountStart/serial/DeleteFirst 4.63
183 TestMountStart/serial/VerifyMountPostDelete 1.39
184 TestMountStart/serial/Stop 2.85
185 TestMountStart/serial/RestartStopped 13.9
186 TestMountStart/serial/VerifyMountPostStop 1.32
189 TestMultiNode/serial/FreshStart2Nodes 206.64
190 TestMultiNode/serial/DeployApp2Nodes 11.13
191 TestMultiNode/serial/PingHostFrom2Pods 3.75
192 TestMultiNode/serial/AddNode 62.13
193 TestMultiNode/serial/ProfileList 1.55
194 TestMultiNode/serial/CopyFile 50.2
195 TestMultiNode/serial/StopNode 8.04
196 TestMultiNode/serial/StartAfterStop 34.73
197 TestMultiNode/serial/RestartKeepsNodes 126.85
198 TestMultiNode/serial/DeleteNode 15.37
199 TestMultiNode/serial/StopMultiNode 26.8
200 TestMultiNode/serial/RestartMultiNode 116.97
201 TestMultiNode/serial/ValidateNameConflict 91.41
205 TestPreload 243.15
206 TestScheduledStopWindows 153.4
210 TestInsufficientStorage 55.02
211 TestRunningBinaryUpgrade 238.73
213 TestKubernetesUpgrade 344.61
214 TestMissingContainerUpgrade 296.37
216 TestNoKubernetes/serial/StartNoK8sWithVersion 0.54
217 TestStoppedBinaryUpgrade/Setup 0.73
218 TestNoKubernetes/serial/StartWithK8s 130.01
219 TestStoppedBinaryUpgrade/Upgrade 300.14
220 TestNoKubernetes/serial/StartWithStopK8s 34.52
221 TestNoKubernetes/serial/Start 26.96
222 TestNoKubernetes/serial/VerifyK8sNotRunning 1.59
223 TestNoKubernetes/serial/ProfileList 19.89
224 TestNoKubernetes/serial/Stop 3.04
225 TestNoKubernetes/serial/StartNoArgs 12.05
226 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 1.47
234 TestStoppedBinaryUpgrade/MinikubeLogs 3.66
247 TestPause/serial/Start 135.73
249 TestStartStop/group/old-k8s-version/serial/FirstStart 162.36
250 TestPause/serial/SecondStartNoReconfiguration 51.81
252 TestStartStop/group/no-preload/serial/FirstStart 154.62
253 TestPause/serial/Pause 2.23
254 TestPause/serial/VerifyStatus 1.57
255 TestPause/serial/Unpause 2
256 TestPause/serial/PauseAgain 2.8
257 TestPause/serial/DeletePaused 12.82
258 TestPause/serial/VerifyDeletedResources 18.7
260 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 106.33
261 TestStartStop/group/old-k8s-version/serial/DeployApp 13.19
262 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.38
263 TestStartStop/group/old-k8s-version/serial/Stop 13.35
265 TestStartStop/group/newest-cni/serial/FirstStart 97.35
266 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 1.34
267 TestStartStop/group/old-k8s-version/serial/SecondStart 443.62
268 TestStartStop/group/no-preload/serial/DeployApp 11.1
269 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.98
270 TestStartStop/group/no-preload/serial/Stop 13.31
271 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 1.38
272 TestStartStop/group/no-preload/serial/SecondStart 356.17
273 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.08
274 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.57
275 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.43
276 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 1.39
277 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 354.42
278 TestStartStop/group/newest-cni/serial/DeployApp 0
279 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.32
280 TestStartStop/group/newest-cni/serial/Stop 13.89
281 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 1.34
282 TestStartStop/group/newest-cni/serial/SecondStart 50.22
283 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
284 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
285 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 2.15
286 TestStartStop/group/newest-cni/serial/Pause 13.98
288 TestStartStop/group/embed-certs/serial/FirstStart 102.71
289 TestStartStop/group/embed-certs/serial/DeployApp 12.01
290 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.81
291 TestStartStop/group/embed-certs/serial/Stop 13.41
292 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 1.38
293 TestStartStop/group/embed-certs/serial/SecondStart 358.09
294 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 59.06
295 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 47.06
296 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 29.04
297 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.56
298 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.88
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 1.83
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 2.1
301 TestStartStop/group/old-k8s-version/serial/Pause 15.99
302 TestStartStop/group/no-preload/serial/Pause 16.42
303 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.82
304 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 1.93
305 TestStartStop/group/default-k8s-diff-port/serial/Pause 27.06
306 TestNetworkPlugins/group/auto/Start 116.11
307 TestNetworkPlugins/group/kindnet/Start 148.35
309 TestNetworkPlugins/group/auto/KubeletFlags 1.62
310 TestNetworkPlugins/group/auto/NetCatPod 41.05
311 TestNetworkPlugins/group/kindnet/ControllerPod 5.06
312 TestNetworkPlugins/group/auto/DNS 0.68
313 TestNetworkPlugins/group/auto/Localhost 0.62
314 TestNetworkPlugins/group/auto/HairPin 5.62
315 TestNetworkPlugins/group/kindnet/KubeletFlags 1.93
316 TestNetworkPlugins/group/kindnet/NetCatPod 47.97
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 66.07
319 TestNetworkPlugins/group/kindnet/DNS 1.34
320 TestNetworkPlugins/group/kindnet/Localhost 0.47
321 TestNetworkPlugins/group/kindnet/HairPin 0.58
322 TestNetworkPlugins/group/false/Start 370.29
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.67
324 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 2
325 TestStartStop/group/embed-certs/serial/Pause 17
326 TestNetworkPlugins/group/bridge/Start 366.16
327 TestNetworkPlugins/group/false/KubeletFlags 1.74
328 TestNetworkPlugins/group/false/NetCatPod 34.81
329 TestNetworkPlugins/group/enable-default-cni/Start 105.78
331 TestNetworkPlugins/group/bridge/KubeletFlags 1.49
332 TestNetworkPlugins/group/bridge/NetCatPod 27.99
334 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 1.54
335 TestNetworkPlugins/group/enable-default-cni/NetCatPod 26.72
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.54
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.58
338 TestNetworkPlugins/group/enable-default-cni/HairPin 0.58
339 TestNetworkPlugins/group/kubenet/Start 101.66
340 TestNetworkPlugins/group/kubenet/KubeletFlags 1.49
341 TestNetworkPlugins/group/kubenet/NetCatPod 26.05
342 TestNetworkPlugins/group/kubenet/DNS 0.6
343 TestNetworkPlugins/group/kubenet/Localhost 0.53
TestDownloadOnly/v1.16.0/json-events (10.66s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-100825 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-100825 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (10.6635857s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.66s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.58s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-100825
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-100825: exit status 85 (575.7391ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-100825 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:08 GMT |          |
	|         | -p download-only-100825        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:08:26
	Running on machine: minikube2
	Binary: Built with gc go1.19.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:08:25.945848    8228 out.go:296] Setting OutFile to fd 684 ...
	I0114 10:08:26.002672    8228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:08:26.002672    8228 out.go:309] Setting ErrFile to fd 688...
	I0114 10:08:26.002672    8228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:08:26.012133    8228 root.go:311] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0114 10:08:26.023201    8228 out.go:303] Setting JSON to true
	I0114 10:08:26.026236    8228 start.go:125] hostinfo: {"hostname":"minikube2","uptime":2517,"bootTime":1673688389,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2486 Build 19045.2486","kernelVersion":"10.0.19045.2486 Build 19045.2486","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0114 10:08:26.026236    8228 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 10:08:26.054923    8228 out.go:97] [download-only-100825] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
	W0114 10:08:26.056007    8228 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0114 10:08:26.056007    8228 notify.go:220] Checking for updates...
	I0114 10:08:26.058753    8228 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0114 10:08:26.060936    8228 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0114 10:08:26.063030    8228 out.go:169] MINIKUBE_LOCATION=15642
	I0114 10:08:26.066515    8228 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0114 10:08:26.070776    8228 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0114 10:08:26.071879    8228 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:08:26.381073    8228 docker.go:138] docker version: linux-20.10.21
	I0114 10:08:26.389763    8228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:08:26.996559    8228 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2023-01-14 10:08:26.547959 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugi
ns\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 10:08:27.018767    8228 out.go:97] Using the docker driver based on user configuration
	I0114 10:08:27.018982    8228 start.go:294] selected driver: docker
	I0114 10:08:27.019093    8228 start.go:838] validating driver "docker" against <nil>
	I0114 10:08:27.032401    8228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:08:27.635793    8228 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2023-01-14 10:08:27.1845541 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plug
ins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 10:08:27.636165    8228 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 10:08:27.755046    8228 start_flags.go:386] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I0114 10:08:27.756096    8228 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0114 10:08:27.797479    8228 out.go:169] Using Docker Desktop driver with root privileges
	I0114 10:08:27.799825    8228 cni.go:95] Creating CNI manager for ""
	I0114 10:08:27.800282    8228 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 10:08:27.800415    8228 start_flags.go:319] config:
	{Name:download-only-100825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-100825 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:08:27.803467    8228 out.go:97] Starting control plane node download-only-100825 in cluster download-only-100825
	I0114 10:08:27.803727    8228 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 10:08:27.805760    8228 out.go:97] Pulling base image ...
	I0114 10:08:27.805760    8228 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 10:08:27.805760    8228 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 10:08:27.850522    8228 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0114 10:08:27.850522    8228 cache.go:57] Caching tarball of preloaded images
	I0114 10:08:27.851077    8228 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 10:08:27.854380    8228 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0114 10:08:27.854461    8228 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0114 10:08:27.929990    8228 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0114 10:08:28.008058    8228 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c to local cache
	I0114 10:08:28.008254    8228 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.36-1668787669-15272@sha256_06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c.tar
	I0114 10:08:28.008871    8228 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.36-1668787669-15272@sha256_06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c.tar
	I0114 10:08:28.008928    8228 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local cache directory
	I0114 10:08:28.010692    8228 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c to local cache
	I0114 10:08:31.949040    8228 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0114 10:08:31.951043    8228 preload.go:256] verifying checksum of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0114 10:08:33.301930    8228 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0114 10:08:33.302585    8228 profile.go:148] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-100825\config.json ...
	I0114 10:08:33.302585    8228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-100825\config.json: {Name:mk3ca9d3f764aaeb9a06d351dedc0832aee0d520 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:08:33.304778    8228 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 10:08:33.307381    8228 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	I0114 10:08:35.962642    8228 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c as a tarball
	I0114 10:08:35.962642    8228 cache.go:193] Successfully downloaded all kic artifacts
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-100825"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.58s)
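The start above fetches the v1.16.0 preload tarball with an md5 digest appended to the download URL and verifies it before caching. If a cached preload is ever suspected of being corrupt, the digest can be recomputed by hand on the Windows host; a sketch using only the tarball path and md5 value already shown in the log (certutil is a stock Windows tool, and the expected digest 326f3ce331abb64565b50b8c9e791244 comes from the ?checksum=md5: query string above):

	# Recompute the MD5 of the cached preload and compare it to 326f3ce331abb64565b50b8c9e791244
	certutil -hashfile C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 MD5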

TestDownloadOnly/v1.25.3/json-events (8.01s)

=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-100825 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-100825 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker: (8.0094233s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (8.01s)

TestDownloadOnly/v1.25.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

TestDownloadOnly/v1.25.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.3/kubectl
--- PASS: TestDownloadOnly/v1.25.3/kubectl (0.00s)

TestDownloadOnly/v1.25.3/LogsDuration (0.46s)

=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-100825
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-100825: exit status 85 (461.7817ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-100825 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:08 GMT |          |
	|         | -p download-only-100825        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	| start   | -o=json --download-only        | download-only-100825 | minikube2\jenkins | v1.28.0 | 14 Jan 23 10:08 GMT |          |
	|         | -p download-only-100825        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:08:37
	Running on machine: minikube2
	Binary: Built with gc go1.19.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:08:37.202033    8208 out.go:296] Setting OutFile to fd 752 ...
	I0114 10:08:37.264889    8208 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:08:37.264889    8208 out.go:309] Setting ErrFile to fd 756...
	I0114 10:08:37.264889    8208 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0114 10:08:37.274983    8208 root.go:311] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0114 10:08:37.305022    8208 out.go:303] Setting JSON to true
	I0114 10:08:37.320476    8208 start.go:125] hostinfo: {"hostname":"minikube2","uptime":2528,"bootTime":1673688389,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2486 Build 19045.2486","kernelVersion":"10.0.19045.2486 Build 19045.2486","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0114 10:08:37.320557    8208 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 10:08:37.624523    8208 out.go:97] [download-only-100825] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
	I0114 10:08:37.625219    8208 notify.go:220] Checking for updates...
	I0114 10:08:37.629848    8208 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0114 10:08:37.632498    8208 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0114 10:08:37.634067    8208 out.go:169] MINIKUBE_LOCATION=15642
	I0114 10:08:37.638612    8208 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0114 10:08:37.643635    8208 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0114 10:08:37.644376    8208 config.go:180] Loaded profile config "download-only-100825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0114 10:08:37.644948    8208 start.go:746] api.Load failed for download-only-100825: filestore "download-only-100825": Docker machine "download-only-100825" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0114 10:08:37.645193    8208 driver.go:365] Setting default libvirt URI to qemu:///system
	W0114 10:08:37.645193    8208 start.go:746] api.Load failed for download-only-100825: filestore "download-only-100825": Docker machine "download-only-100825" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0114 10:08:37.906962    8208 docker.go:138] docker version: linux-20.10.21
	I0114 10:08:37.916487    8208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:08:38.481216    8208 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2023-01-14 10:08:38.0680004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plug
ins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 10:08:38.485130    8208 out.go:97] Using the docker driver based on existing profile
	I0114 10:08:38.485206    8208 start.go:294] selected driver: docker
	I0114 10:08:38.485206    8208 start.go:838] validating driver "docker" against &{Name:download-only-100825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-100825 Namespace:default APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:08:38.498275    8208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:08:39.077773    8208 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2023-01-14 10:08:38.6534212 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plug
ins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 10:08:39.123887    8208 cni.go:95] Creating CNI manager for ""
	I0114 10:08:39.124087    8208 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 10:08:39.124087    8208 start_flags.go:319] config:
	{Name:download-only-100825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-100825 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run
/socket_vmnet StaticIP:}
	I0114 10:08:39.713747    8208 out.go:97] Starting control plane node download-only-100825 in cluster download-only-100825
	I0114 10:08:39.714064    8208 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 10:08:39.716688    8208 out.go:97] Pulling base image ...
	I0114 10:08:39.716688    8208 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 10:08:39.717403    8208 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 10:08:39.758367    8208 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 10:08:39.758367    8208 cache.go:57] Caching tarball of preloaded images
	I0114 10:08:39.758367    8208 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 10:08:39.762098    8208 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I0114 10:08:39.762196    8208 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I0114 10:08:39.824714    8208 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4?checksum=md5:624cb874287e7e3d793b79e4205a7f98 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 10:08:39.915107    8208 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c to local cache
	I0114 10:08:39.915325    8208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.36-1668787669-15272@sha256_06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c.tar
	I0114 10:08:39.915634    8208 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.36-1668787669-15272@sha256_06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c.tar
	I0114 10:08:39.915689    8208 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local cache directory
	I0114 10:08:39.915689    8208 image.go:64] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local cache directory, skipping pull
	I0114 10:08:39.915689    8208 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in cache, skipping pull
	I0114 10:08:39.915689    8208 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-100825"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.46s)
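
The localpath.go "windows sanitize:" lines above show minikube rewriting the kic base image reference into a Windows-safe cache file name: the ':' before the tag and before the digest becomes '_', while the drive letter's ':' is untouched. A minimal Go sketch of that substitution, written only to illustrate the transformation the log reports, not minikube's actual implementation (the path below is a shortened, made-up example in the same shape):

package main

import (
	"fmt"
	"strings"
)

// sanitizeCacheFileName is a hypothetical helper: it maps ':' to '_' in the
// file name portion of the path (after the last backslash) so the name is
// legal on Windows, matching the before/after pair in the log above.
func sanitizeCacheFileName(p string) string {
	i := strings.LastIndex(p, `\`)
	return p[:i+1] + strings.ReplaceAll(p[i+1:], ":", "_")
}

func main() {
	in := `C:\minikube\cache\kic\amd64\kicbase-builds:v0.0.36@sha256:06094f.tar`
	fmt.Println(sanitizeCacheFileName(in))
	// Prints: C:\minikube\cache\kic\amd64\kicbase-builds_v0.0.36@sha256_06094f.tar
}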

                                                
                                    
TestDownloadOnly/DeleteAll (2.42s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.4228798s)
--- PASS: TestDownloadOnly/DeleteAll (2.42s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (1.61s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-100825
aaa_download_only_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-100825: (1.6130941s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (1.61s)

                                                
                                    
TestDownloadOnlyKic (35.66s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-100851 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-100851 --force --alsologtostderr --driver=docker: (32.8987471s)
helpers_test.go:175: Cleaning up "download-docker-100851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-100851
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-100851: (1.6452389s)
--- PASS: TestDownloadOnlyKic (35.66s)

                                                
                                    
TestBinaryMirror (4.27s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-100927 --alsologtostderr --binary-mirror http://127.0.0.1:61760 --driver=docker
aaa_download_only_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-100927 --alsologtostderr --binary-mirror http://127.0.0.1:61760 --driver=docker: (2.4305587s)
helpers_test.go:175: Cleaning up "binary-mirror-100927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-100927
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-100927: (1.6071823s)
--- PASS: TestBinaryMirror (4.27s)

                                                
                                    
TestOffline (176.7s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-113957 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-113957 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (2m47.2267296s)
helpers_test.go:175: Cleaning up "offline-docker-113957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-113957
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-113957: (9.4734338s)
--- PASS: TestOffline (176.70s)

                                                
                                    
TestAddons/Setup (453.82s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-100931 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-100931 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m33.8173636s)
--- PASS: TestAddons/Setup (453.82s)

                                                
                                    
TestAddons/parallel/MetricsServer (9.18s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:364: metrics-server stabilized in 36.4775ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-56c6cfbdd9-rmh2x" [056a394e-91ba-4c85-8cab-1115cfc4cd60] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0348847s
addons_test.go:372: (dbg) Run:  kubectl --context addons-100931 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:389: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-100931 addons disable metrics-server --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:389: (dbg) Done: out/minikube-windows-amd64.exe -p addons-100931 addons disable metrics-server --alsologtostderr -v=1: (3.8158748s)
--- PASS: TestAddons/parallel/MetricsServer (9.18s)
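
The "waiting 6m0s for pods matching ... healthy within ..." lines above come from a poll-until-ready helper. A rough re-creation of that pattern, shelling out to kubectl with the label selector from this run; it only checks the pod phase, whereas the real helper in helpers_test.go is more involved:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls kubectl with a label selector until every matched pod
// reports phase Running, or the deadline passes. Illustrative sketch only.
func waitForRunning(kubecontext, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubecontext,
			"get", "pods", "-n", namespace, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			running := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					running = false
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods matching %q not Running within %s", selector, timeout)
}

func main() {
	err := waitForRunning("addons-100931", "kube-system", "k8s-app=metrics-server", 6*time.Minute)
	fmt.Println("wait result:", err)
}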

                                                
                                    
TestAddons/parallel/HelmTiller (32.98s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:413: tiller-deploy stabilized in 36.6251ms

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-696b5bfbb7-9rlxc" [d8fe202b-4035-45e5-8422-b08ca11b1c8e] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.03292s

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:430: (dbg) Run:  kubectl --context addons-100931 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:430: (dbg) Done: kubectl --context addons-100931 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (24.6724092s)
addons_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-100931 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p addons-100931 addons disable helm-tiller --alsologtostderr -v=1: (3.2178718s)
--- PASS: TestAddons/parallel/HelmTiller (32.98s)

                                                
                                    
TestAddons/parallel/CSI (86.89s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:518: csi-hostpath-driver pods stabilized in 58.267ms
addons_test.go:521: (dbg) Run:  kubectl --context addons-100931 create -f testdata\csi-hostpath-driver\pvc.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-100931 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:531: (dbg) Run:  kubectl --context addons-100931 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:536: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [2692a13e-f828-43f1-8a3e-b6d8f313ebb9] Pending
helpers_test.go:342: "task-pv-pod" [2692a13e-f828-43f1-8a3e-b6d8f313ebb9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [2692a13e-f828-43f1-8a3e-b6d8f313ebb9] Running
addons_test.go:536: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 43.0806351s
addons_test.go:541: (dbg) Run:  kubectl --context addons-100931 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:546: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-100931 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-100931 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:551: (dbg) Run:  kubectl --context addons-100931 delete pod task-pv-pod
addons_test.go:551: (dbg) Done: kubectl --context addons-100931 delete pod task-pv-pod: (2.2544056s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-100931 delete pvc hpvc
addons_test.go:563: (dbg) Run:  kubectl --context addons-100931 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:563: (dbg) Done: kubectl --context addons-100931 create -f testdata\csi-hostpath-driver\pvc-restore.yaml: (1.0808097s)
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-100931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-100931 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-100931 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [7f35fe25-f249-4b22-9e46-22ec802344c0] Pending
helpers_test.go:342: "task-pv-pod-restore" [7f35fe25-f249-4b22-9e46-22ec802344c0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [7f35fe25-f249-4b22-9e46-22ec802344c0] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 19.0473129s
addons_test.go:583: (dbg) Run:  kubectl --context addons-100931 delete pod task-pv-pod-restore
addons_test.go:583: (dbg) Done: kubectl --context addons-100931 delete pod task-pv-pod-restore: (1.4614466s)
addons_test.go:587: (dbg) Run:  kubectl --context addons-100931 delete pvc hpvc-restore
addons_test.go:591: (dbg) Run:  kubectl --context addons-100931 delete volumesnapshot new-snapshot-demo
addons_test.go:595: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-100931 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:595: (dbg) Done: out/minikube-windows-amd64.exe -p addons-100931 addons disable csi-hostpath-driver --alsologtostderr -v=1: (11.0966743s)
addons_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-100931 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:599: (dbg) Done: out/minikube-windows-amd64.exe -p addons-100931 addons disable volumesnapshots --alsologtostderr -v=1: (2.2135756s)
--- PASS: TestAddons/parallel/CSI (86.89s)
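
Before the restore steps above, helpers_test.go:417 polls the VolumeSnapshot's .status.readyToUse field. A small, self-contained sketch of that readiness check, using the names from this run; the helper itself is illustrative, not the test's real code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// snapshotReady shells out to kubectl and reports whether the named
// VolumeSnapshot has .status.readyToUse == true.
func snapshotReady(kubecontext, name, namespace string) bool {
	out, err := exec.Command("kubectl", "--context", kubecontext,
		"get", "volumesnapshot", name, "-n", namespace,
		"-o", "jsonpath={.status.readyToUse}").Output()
	return err == nil && strings.TrimSpace(string(out)) == "true"
}

func main() {
	for i := 0; i < 30; i++ {
		if snapshotReady("addons-100931", "new-snapshot-demo", "default") {
			fmt.Println("snapshot ready, safe to create the restore PVC")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("snapshot never became ready")
}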

                                                
                                    
TestAddons/parallel/Headlamp (35.58s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:774: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-100931 --alsologtostderr -v=1
addons_test.go:774: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-100931 --alsologtostderr -v=1: (4.5148397s)
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-764769c887-xsdh9" [c41e7daf-d05c-4c39-b279-7f723d7b99a3] Pending
helpers_test.go:342: "headlamp-764769c887-xsdh9" [c41e7daf-d05c-4c39-b279-7f723d7b99a3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-764769c887-xsdh9" [c41e7daf-d05c-4c39-b279-7f723d7b99a3] Running
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 31.0607441s
--- PASS: TestAddons/parallel/Headlamp (35.58s)

                                                
                                    
TestAddons/parallel/CloudSpanner (9.11s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:342: "cloud-spanner-emulator-7d7766f55c-4578z" [4e00cb74-5f9d-4c26-a23b-5aa71f439281] Running

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.1939036s
addons_test.go:798: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-100931

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:798: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-100931: (3.8536794s)
--- PASS: TestAddons/parallel/CloudSpanner (9.11s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.5s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:607: (dbg) Run:  kubectl --context addons-100931 create ns new-namespace
addons_test.go:621: (dbg) Run:  kubectl --context addons-100931 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.50s)

                                                
                                    
TestAddons/StoppedEnableDisable (14.82s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:139: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-100931
addons_test.go:139: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-100931: (13.6361399s)
addons_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-100931
addons_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-100931
--- PASS: TestAddons/StoppedEnableDisable (14.82s)

                                                
                                    
TestCertOptions (102.33s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-114838 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-114838 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m32.0989777s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-114838 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-114838 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (1.5960649s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-114838 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-114838 -- "sudo cat /etc/kubernetes/admin.conf": (1.5688114s)
helpers_test.go:175: Cleaning up "cert-options-114838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-114838
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-114838: (6.8167691s)
--- PASS: TestCertOptions (102.33s)
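
The openssl step above dumps apiserver.crt so the extra --apiserver-ips and --apiserver-names values from the start command can be checked against the certificate's subject alternative names. A small Go program that reads the same information from a local copy of the cert (fetching it out of the node via minikube ssh/cp is left out); this is an illustration, not the test's code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: sancheck <cert.pem>")
		os.Exit(1)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost and www.google.com among them
	fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1 and 192.168.15.15 among them
}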

                                                
                                    
TestCertExpiration (335.32s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-114648 --memory=2048 --cert-expiration=3m --driver=docker
E0114 11:47:05.230863    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-114648 --memory=2048 --cert-expiration=3m --driver=docker: (1m37.7499548s)

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-114648 --memory=2048 --cert-expiration=8760h --driver=docker

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-114648 --memory=2048 --cert-expiration=8760h --driver=docker: (45.4093058s)
helpers_test.go:175: Cleaning up "cert-expiration-114648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-114648

                                                
                                                
=== CONT  TestCertExpiration
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-114648: (12.1480792s)
--- PASS: TestCertExpiration (335.32s)

                                                
                                    
TestDockerFlags (114.24s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-114719 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-114719 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m41.2997669s)
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-114719 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-114719 ssh "sudo systemctl show docker --property=Environment --no-pager": (1.7292208s)
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-114719 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-114719 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (1.8520495s)
helpers_test.go:175: Cleaning up "docker-flags-114719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-114719
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-114719: (9.3537515s)
--- PASS: TestDockerFlags (114.24s)
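
docker_test.go:50/61 above read the docker unit's Environment and ExecStart properties inside the node to confirm the --docker-env and --docker-opt values landed. A stripped-down re-creation of that check; the expected substrings are an assumption derived from the flags in the start command, not values copied out of the test:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "docker-flags-114719"
	// Assumed expectations: env vars passed via --docker-env, options via --docker-opt.
	expect := map[string][]string{
		"Environment": {"FOO=BAR", "BAZ=BAT"},
		"ExecStart":   {"debug", "icc=true"},
	}
	for prop, wants := range expect {
		out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", profile, "ssh",
			"sudo systemctl show docker --property="+prop+" --no-pager").CombinedOutput()
		if err != nil {
			fmt.Println(prop, "query failed:", err)
			continue
		}
		for _, w := range wants {
			fmt.Printf("%s contains %q: %v\n", prop, w, strings.Contains(string(out), w))
		}
	}
}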

                                                
                                    
TestForceSystemdFlag (114.56s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-114453 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-114453 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (1m42.6829582s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-114453 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-114453 ssh "docker info --format {{.CgroupDriver}}": (1.5921457s)
helpers_test.go:175: Cleaning up "force-systemd-flag-114453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-114453
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-114453: (10.2807596s)
--- PASS: TestForceSystemdFlag (114.56s)

                                                
                                    
TestForceSystemdEnv (128.63s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-114511 --memory=2048 --alsologtostderr -v=5 --driver=docker
E0114 11:46:02.884874    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-114511 --memory=2048 --alsologtostderr -v=5 --driver=docker: (1m55.7444132s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-114511 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-114511 ssh "docker info --format {{.CgroupDriver}}": (1.713167s)
helpers_test.go:175: Cleaning up "force-systemd-env-114511" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-114511
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-114511: (11.176065s)
--- PASS: TestForceSystemdEnv (128.63s)
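
Both force-systemd tests above end with the same probe (docker_test.go:104): ask the docker daemon inside the node which cgroup driver it uses. A minimal sketch of that probe, reusing the profile name and binary path from this run; the pass condition (systemd) is simply what the --force-systemd flag is meant to produce:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "force-systemd-env-114511",
		"ssh", "docker info --format {{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out)), "(force-systemd runs expect systemd)")
}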

                                                
                                    
TestErrorSpam/setup (83.91s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-101950 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-101950 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 --driver=docker: (1m23.9105648s)
error_spam_test.go:91: acceptable stderr: "! C:\\ProgramData\\chocolatey\\bin\\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.25.3."
--- PASS: TestErrorSpam/setup (83.91s)

                                                
                                    
TestErrorSpam/start (6.06s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 start --dry-run: (2.042073s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 start --dry-run: (1.9606298s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 start --dry-run: (2.052746s)
--- PASS: TestErrorSpam/start (6.06s)

                                                
                                    
TestErrorSpam/status (6.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 status: (1.8258587s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 status: (2.4966749s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 status: (1.7715972s)
--- PASS: TestErrorSpam/status (6.10s)

                                                
                                    
TestErrorSpam/pause (5.04s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 pause: (2.0521756s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 pause: (1.5166246s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 pause: (1.4693051s)
--- PASS: TestErrorSpam/pause (5.04s)

                                                
                                    
TestErrorSpam/unpause (5.66s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 unpause: (1.8794901s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 unpause: (2.1178932s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 unpause: (1.656785s)
--- PASS: TestErrorSpam/unpause (5.66s)

                                                
                                    
TestErrorSpam/stop (16.7s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 stop: (7.9845723s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 stop: (4.3421397s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-101950 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-101950 stop: (4.3735144s)
--- PASS: TestErrorSpam/stop (16.70s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\9968\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (98.4s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-102159 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0114 10:22:05.188891    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:22:05.204630    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:22:05.220052    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:22:05.251893    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:22:05.297980    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:22:05.391910    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:22:05.565046    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:22:05.896313    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:22:06.541868    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:22:07.833134    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:22:10.396857    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:22:15.518297    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:22:25.767798    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:22:46.250417    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:23:27.216561    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
functional_test.go:2161: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-102159 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m38.3916719s)
--- PASS: TestFunctional/serial/StartWithProxy (98.40s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (53.62s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-102159 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-102159 --alsologtostderr -v=8: (53.6209005s)
functional_test.go:656: soft start took 53.6216143s for "functional-102159" cluster.
--- PASS: TestFunctional/serial/SoftStart (53.62s)

                                                
                                    
TestFunctional/serial/KubeContext (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.17s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.35s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-102159 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (8.58s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 cache add k8s.gcr.io/pause:3.1: (3.5335814s)
functional_test.go:1042: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 cache add k8s.gcr.io/pause:3.3: (2.5133968s)
functional_test.go:1042: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 cache add k8s.gcr.io/pause:latest: (2.5310853s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.58s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (4.6s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-102159 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1398962186\001
functional_test.go:1070: (dbg) Done: docker build -t minikube-local-cache-test:functional-102159 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1398962186\001: (1.8037368s)
functional_test.go:1082: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 cache add minikube-local-cache-test:functional-102159
functional_test.go:1082: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 cache add minikube-local-cache-test:functional-102159: (2.2012256s)
functional_test.go:1087: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 cache delete minikube-local-cache-test:functional-102159
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-102159
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.60s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.38s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.38s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.38s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.38s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh sudo crictl images
functional_test.go:1117: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 ssh sudo crictl images: (1.4858209s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.49s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (6.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1140: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 ssh sudo docker rmi k8s.gcr.io/pause:latest: (1.4659173s)
functional_test.go:1146: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
E0114 10:24:49.146641    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-102159 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (1.3853689s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 cache reload: (2.3481583s)
functional_test.go:1156: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1156: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (1.3935163s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (6.59s)
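
The cache_reload steps above amount to: if crictl inside the node no longer sees the cached image (the non-zero exit after the docker rmi), minikube cache reload pushes it back in. A rough re-creation of that check/reload/re-check flow, with the profile and image taken from this run; not the test's actual code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-102159"
	image := "k8s.gcr.io/pause:latest"
	// inspect asks crictl inside the node whether the image is present.
	inspect := func() error {
		return exec.Command("out/minikube-windows-amd64.exe", "-p", profile,
			"ssh", "sudo crictl inspecti "+image).Run()
	}
	if err := inspect(); err != nil {
		fmt.Println("image missing in node, reloading cache:", err)
		if err := exec.Command("out/minikube-windows-amd64.exe", "-p", profile, "cache", "reload").Run(); err != nil {
			fmt.Println("cache reload failed:", err)
			return
		}
	}
	if err := inspect(); err != nil {
		fmt.Println("image still missing after reload:", err)
		return
	}
	fmt.Println("image present:", image)
}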

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.74s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.71s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 kubectl -- --context functional-102159 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.71s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out\kubectl.exe --context functional-102159 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.46s)

                                                
                                    
TestFunctional/serial/ExtraConfig (58.28s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-102159 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:750: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-102159 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (58.2796969s)
functional_test.go:754: restart took 58.280154s for "functional-102159" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (58.28s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-102159 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.27s)
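
The ComponentHealth check above pulls the control-plane pods as JSON and reports each component's phase and Ready condition. A self-contained sketch of reading those same fields; the struct only models the parts of the Kubernetes pod object that are used here, and the traversal is an illustrative re-creation, not functional_test.go's code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just the fields consumed below: labels, phase, conditions.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-102159",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println(err)
		return
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		// Control-plane static pods carry a "component" label (etcd, kube-apiserver, ...).
		fmt.Printf("%s phase: %s, Ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}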

                                                
                                    
TestFunctional/serial/LogsCmd (3.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 logs
functional_test.go:1229: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 logs: (3.4620725s)
--- PASS: TestFunctional/serial/LogsCmd (3.46s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (3.67s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2804249446\001\logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2804249446\001\logs.txt: (3.6638077s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.67s)
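
As a usage note, `minikube logs` either prints to stdout or, with `--file`, writes to a path of your choosing; the temp path above is generated per test run. A sketch with an illustrative path:

        out/minikube-windows-amd64.exe -p functional-102159 logs                                    # print cluster logs
        out/minikube-windows-amd64.exe -p functional-102159 logs --file C:\tmp\minikube-logs.txt    # write to a file instead (path is illustrative)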

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-102159 config get cpus: exit status 14 (429.9988ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 config set cpus 2

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-102159 config get cpus: exit status 14 (416.0782ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.53s)
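
Both exit-status-14 results above are expected: `config get` fails when the key is absent from the profile config. A sketch of the round trip the test performs (the value 2 mirrors the test):

        out/minikube-windows-amd64.exe -p functional-102159 config get cpus     # exit status 14: key not set
        out/minikube-windows-amd64.exe -p functional-102159 config set cpus 2
        out/minikube-windows-amd64.exe -p functional-102159 config get cpus     # now prints the stored value
        out/minikube-windows-amd64.exe -p functional-102159 config unset cpus   # a subsequent get fails again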

                                                
                                    
x
+
TestFunctional/parallel/DryRun (4.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-102159 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:967: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-102159 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.8251165s)

                                                
                                                
-- stdout --
	* [functional-102159] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 10:26:11.194420    9656 out.go:296] Setting OutFile to fd 752 ...
	I0114 10:26:11.307794    9656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:26:11.307794    9656 out.go:309] Setting ErrFile to fd 988...
	I0114 10:26:11.307794    9656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:26:11.339782    9656 out.go:303] Setting JSON to false
	I0114 10:26:11.343797    9656 start.go:125] hostinfo: {"hostname":"minikube2","uptime":3582,"bootTime":1673688389,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2486 Build 19045.2486","kernelVersion":"10.0.19045.2486 Build 19045.2486","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0114 10:26:11.343797    9656 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 10:26:11.349783    9656 out.go:177] * [functional-102159] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
	I0114 10:26:11.351799    9656 notify.go:220] Checking for updates...
	I0114 10:26:11.354778    9656 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0114 10:26:11.357788    9656 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0114 10:26:11.360794    9656 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:26:11.363782    9656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:26:11.372784    9656 config.go:180] Loaded profile config "functional-102159": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 10:26:11.373783    9656 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:26:11.800025    9656 docker.go:138] docker version: linux-20.10.21
	I0114 10:26:11.817037    9656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:26:12.562507    9656 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2023-01-14 10:26:12.0295441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plug
ins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 10:26:12.566501    9656 out.go:177] * Using the docker driver based on existing profile
	I0114 10:26:12.568486    9656 start.go:294] selected driver: docker
	I0114 10:26:12.568486    9656 start.go:838] validating driver "docker" against &{Name:functional-102159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-102159 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:fals
e portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:26:12.569490    9656 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:26:12.633322    9656 out.go:177] 
	W0114 10:26:12.636167    9656 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0114 10:26:12.638984    9656 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-102159 --dry-run --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:984: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-102159 --dry-run --alsologtostderr -v=1 --driver=docker: (2.3100821s)
--- PASS: TestFunctional/parallel/DryRun (4.14s)
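
The first run is meant to fail: `--dry-run` only validates the requested configuration, and 250MB is below minikube's stated 1800MB minimum (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY). A sketch of both invocations:

        out/minikube-windows-amd64.exe start -p functional-102159 --dry-run --memory 250MB --driver=docker   # rejected during validation; nothing is created
        out/minikube-windows-amd64.exe start -p functional-102159 --dry-run --driver=docker                  # validates cleanly against the existing profile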

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-102159 --dry-run --memory 250MB --alsologtostderr --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-102159 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.9521133s)

                                                
                                                
-- stdout --
	* [functional-102159] minikube v1.28.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 10:26:09.230072    6616 out.go:296] Setting OutFile to fd 928 ...
	I0114 10:26:09.323069    6616 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:26:09.323069    6616 out.go:309] Setting ErrFile to fd 972...
	I0114 10:26:09.323069    6616 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:26:09.349049    6616 out.go:303] Setting JSON to false
	I0114 10:26:09.359097    6616 start.go:125] hostinfo: {"hostname":"minikube2","uptime":3580,"bootTime":1673688389,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2486 Build 19045.2486","kernelVersion":"10.0.19045.2486 Build 19045.2486","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0114 10:26:09.359097    6616 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 10:26:09.364052    6616 out.go:177] * [functional-102159] minikube v1.28.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
	I0114 10:26:09.370050    6616 notify.go:220] Checking for updates...
	I0114 10:26:09.372091    6616 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0114 10:26:09.375061    6616 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0114 10:26:09.378055    6616 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:26:09.380059    6616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:26:09.384068    6616 config.go:180] Loaded profile config "functional-102159": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 10:26:09.385062    6616 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:26:09.829056    6616 docker.go:138] docker version: linux-20.10.21
	I0114 10:26:09.839062    6616 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 10:26:10.592051    6616 info.go:266] docker info: {ID:3UBZ:WVUB:HLKZ:K7PJ:5MPF:INSH:AFXZ:DVGD:NGYR:ZXOC:2S5U:CWJN Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:53 SystemTime:2023-01-14 10:26:10.018925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:C:\Program Files\Docker\cli-plugi
ns\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 10:26:10.597055    6616 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0114 10:26:10.600055    6616 start.go:294] selected driver: docker
	I0114 10:26:10.600055    6616 start.go:838] validating driver "docker" against &{Name:functional-102159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-102159 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:fals
e portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:26:10.600055    6616 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:26:10.749064    6616 out.go:177] 
	W0114 10:26:10.753069    6616 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0114 10:26:10.768062    6616 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.95s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (7.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 status: (2.3878794s)
functional_test.go:853: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:853: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (2.5849474s)
functional_test.go:865: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 status -o json

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:865: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 status -o json: (2.5210632s)
--- PASS: TestFunctional/parallel/StatusCmd (7.49s)
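
The three invocations differ only in output format: `-f` takes a Go template over the status struct, and `-o json` emits the same data machine-readably. A sketch (template fields taken from the command above):

        out/minikube-windows-amd64.exe -p functional-102159 status              # human-readable summary
        out/minikube-windows-amd64.exe -p functional-102159 status -o json      # JSON for scripting
        out/minikube-windows-amd64.exe -p functional-102159 status -f host:{{.Host}},kubeconfig:{{.Kubeconfig}}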

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (1.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (65.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [3eb6b3f4-85a9-4a58-a976-90ae47003eac] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.1005031s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-102159 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-102159 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-102159 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-102159 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-102159 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [b8420616-b5fd-42c0-b7a9-c83030278ce5] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [b8420616-b5fd-42c0-b7a9-c83030278ce5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [b8420616-b5fd-42c0-b7a9-c83030278ce5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 40.0921678s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-102159 exec sp-pod -- touch /tmp/mount/foo
E0114 10:27:05.183274    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-102159 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-102159 delete -f testdata/storage-provisioner/pod.yaml: (3.5005437s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-102159 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [836f5e25-6f77-4207-bf2f-01a2a8b4de80] Pending
helpers_test.go:342: "sp-pod" [836f5e25-6f77-4207-bf2f-01a2a8b4de80] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [836f5e25-6f77-4207-bf2f-01a2a8b4de80] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.0736836s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-102159 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (65.48s)
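
The PVC test's shape is: bind a claim, mount it in a pod, write a file, delete and recreate the pod, then confirm the file persisted on the claim. A sketch of the same sequence using the test's manifests:

        kubectl --context functional-102159 apply -f testdata/storage-provisioner/pvc.yaml
        kubectl --context functional-102159 apply -f testdata/storage-provisioner/pod.yaml
        kubectl --context functional-102159 exec sp-pod -- touch /tmp/mount/foo     # write through the mounted claim
        kubectl --context functional-102159 delete -f testdata/storage-provisioner/pod.yaml
        kubectl --context functional-102159 apply -f testdata/storage-provisioner/pod.yaml
        kubectl --context functional-102159 exec sp-pod -- ls /tmp/mount            # foo should still be present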

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (3.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 ssh "echo hello": (1.7968249s)
functional_test.go:1672: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh "cat /etc/hostname"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1672: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 ssh "cat /etc/hostname": (1.8586699s)
--- PASS: TestFunctional/parallel/SSHCmd (3.66s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (6.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 cp testdata\cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 cp testdata\cp-test.txt /home/docker/cp-test.txt: (1.3034303s)

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh -n functional-102159 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 ssh -n functional-102159 "sudo cat /home/docker/cp-test.txt": (1.5939439s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 cp functional-102159:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd1838005831\001\cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 cp functional-102159:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd1838005831\001\cp-test.txt: (1.7799916s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh -n functional-102159 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 ssh -n functional-102159 "sudo cat /home/docker/cp-test.txt": (1.9524178s)
--- PASS: TestFunctional/parallel/CpCmd (6.63s)
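
The cp test round-trips a file: host to node, verify over ssh, node back to host, verify again. A sketch with an illustrative host-side destination (C:\tmp is an assumption; the test uses a generated temp directory):

        out/minikube-windows-amd64.exe -p functional-102159 cp testdata\cp-test.txt /home/docker/cp-test.txt
        out/minikube-windows-amd64.exe -p functional-102159 ssh -n functional-102159 "sudo cat /home/docker/cp-test.txt"
        out/minikube-windows-amd64.exe -p functional-102159 cp functional-102159:/home/docker/cp-test.txt C:\tmp\cp-test.txt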

                                                
                                    
x
+
TestFunctional/parallel/MySQL (102.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-102159 replace --force -f testdata\mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-596b7fcdbf-r5qcr" [49328d8a-bf22-466e-b722-c9d1061506d0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-r5qcr" [49328d8a-bf22-466e-b722-c9d1061506d0] Running
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m20.0581512s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-102159 exec mysql-596b7fcdbf-r5qcr -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-102159 exec mysql-596b7fcdbf-r5qcr -- mysql -ppassword -e "show databases;": exit status 1 (606.7172ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-102159 exec mysql-596b7fcdbf-r5qcr -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-102159 exec mysql-596b7fcdbf-r5qcr -- mysql -ppassword -e "show databases;": exit status 1 (474.8518ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-102159 exec mysql-596b7fcdbf-r5qcr -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-102159 exec mysql-596b7fcdbf-r5qcr -- mysql -ppassword -e "show databases;": exit status 1 (495.9558ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-102159 exec mysql-596b7fcdbf-r5qcr -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-102159 exec mysql-596b7fcdbf-r5qcr -- mysql -ppassword -e "show databases;": exit status 1 (655.5838ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-102159 exec mysql-596b7fcdbf-r5qcr -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-102159 exec mysql-596b7fcdbf-r5qcr -- mysql -ppassword -e "show databases;": exit status 1 (658.5118ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-102159 exec mysql-596b7fcdbf-r5qcr -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-102159 exec mysql-596b7fcdbf-r5qcr -- mysql -ppassword -e "show databases;": exit status 1 (546.9973ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-102159 exec mysql-596b7fcdbf-r5qcr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (102.26s)
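
The string of non-zero exits is expected: mysqld takes a while to finish initialising inside the pod (socket errors first, then transient access-denied errors), and the test keeps retrying the same query until it succeeds. The final, successful form of that query (the pod name comes from this run's generated deployment suffix):

        kubectl --context functional-102159 exec mysql-596b7fcdbf-r5qcr -- mysql -ppassword -e "show databases;"   # retry until mysqld accepts connections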

                                                
                                    
x
+
TestFunctional/parallel/FileSync (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/9968/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo cat /etc/test/nested/copy/9968/hosts"
functional_test.go:1858: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo cat /etc/test/nested/copy/9968/hosts": (1.5498102s)
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (1.55s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (9.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/9968.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo cat /etc/ssl/certs/9968.pem"
functional_test.go:1900: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo cat /etc/ssl/certs/9968.pem": (1.5007293s)
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/9968.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo cat /usr/share/ca-certificates/9968.pem"
functional_test.go:1900: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo cat /usr/share/ca-certificates/9968.pem": (1.4320252s)
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1900: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo cat /etc/ssl/certs/51391683.0": (1.8887207s)
functional_test.go:1926: Checking for existence of /etc/ssl/certs/99682.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo cat /etc/ssl/certs/99682.pem"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo cat /etc/ssl/certs/99682.pem": (1.7803434s)
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/99682.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo cat /usr/share/ca-certificates/99682.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo cat /usr/share/ca-certificates/99682.pem": (1.7142372s)
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (1.5164416s)
--- PASS: TestFunctional/parallel/CertSync (9.83s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-102159 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo systemctl is-active crio"
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo systemctl is-active crio": exit status 1 (1.5932315s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.59s)
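
The non-zero exit is the pass condition here: with the docker runtime in use, crio must be inactive, and `systemctl is-active` exits non-zero for an inactive unit (surfaced above as ssh status 3 / exit status 1). A sketch, where the second check is an added assumption:

        out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo systemctl is-active crio"     # expect "inactive" and a non-zero exit
        out/minikube-windows-amd64.exe -p functional-102159 ssh "sudo systemctl is-active docker"   # assumed counterpart: the active runtime should report "active"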

                                                
                                    
x
+
TestFunctional/parallel/License (2.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2215: (dbg) Done: out/minikube-windows-amd64.exe license: (2.3858487s)
--- PASS: TestFunctional/parallel/License (2.40s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5443127s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (2.23s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (2.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.8808935s)
functional_test.go:1311: Took "1.8808935s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: Took "508.034ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (2.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (2.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.9853943s)
functional_test.go:1362: Took "1.9853943s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1375: Took "479.1275ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (2.46s)
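
Across the ProfileCmd subtests, `profile list` is timed in several forms; `--light` skips the per-profile status validation, which lines up with it returning in roughly half a second versus about two seconds for the full listing. A sketch:

        out/minikube-windows-amd64.exe profile list -o json            # full listing, queries each profile's status
        out/minikube-windows-amd64.exe profile list -o json --light    # faster listing, as timed above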

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-102159 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (29.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-102159 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [656e8716-4689-4d01-b6cc-ac6b8cd78805] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [656e8716-4689-4d01-b6cc-ac6b8cd78805] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 28.1629391s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (29.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-102159 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.27s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-102159 tunnel --alsologtostderr] ...
helpers_test.go:506: unable to kill pid 7560: TerminateProcess: Access is denied.
helpers_test.go:506: unable to kill pid 6244: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
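
The tunnel subtests above follow a start/exercise/stop pattern: StartTunnel launches the tunnel as a background daemon, WaitService exercises a LoadBalancer service through it, and DeleteTunnel tears it down, at which point the report shows TerminateProcess returning "Access is denied" for child pids that are already gone. A minimal sketch of that pattern, assuming the binary and profile name from this run (not the harness's helpers):

-- go sketch --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	tunnel := exec.Command("out/minikube-windows-amd64.exe", "-p", "functional-102159", "tunnel", "--alsologtostderr")
	if err := tunnel.Start(); err != nil {
		fmt.Println("failed to start tunnel:", err)
		return
	}

	// ... exercise LoadBalancer services here (cf. the nginx-svc checks above) ...
	time.Sleep(30 * time.Second)

	// Stop the daemon; on Windows this can report "Access is denied" when the
	// process has already exited, as helpers_test.go:506 does above.
	if err := tunnel.Process.Kill(); err != nil {
		fmt.Println("unable to kill tunnel process:", err)
	}
	_ = tunnel.Wait()
}
-- /go sketch --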

                                                
                                    
TestFunctional/parallel/Version/short (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 version --short
--- PASS: TestFunctional/parallel/Version/short (0.50s)

                                                
                                    
TestFunctional/parallel/Version/components (3.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 version -o=json --components
functional_test.go:2197: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 version -o=json --components: (3.1610012s)
--- PASS: TestFunctional/parallel/Version/components (3.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image ls --format short
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image ls --format short: (1.2107808s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-102159 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-102159
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-102159
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image ls --format table
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image ls --format table: (1.295758s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-102159 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.8               | 4873874c08efc | 711kB  |
| docker.io/library/nginx                     | latest            | a99a39d070bfd | 142MB  |
| registry.k8s.io/kube-controller-manager     | v1.25.3           | 6039992312758 | 117MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| k8s.gcr.io/pause                            | 3.6               | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-102159 | 1862325235bdb | 30B    |
| registry.k8s.io/kube-apiserver              | v1.25.3           | 0346dbd74bcb9 | 128MB  |
| registry.k8s.io/etcd                        | 3.5.4-0           | a8a176a5d5d69 | 300MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | c433c51bbd661 | 40.7MB |
| registry.k8s.io/kube-scheduler              | v1.25.3           | 6d23ec0e8b87e | 50.6MB |
| registry.k8s.io/kube-proxy                  | v1.25.3           | beaaf00edd38a | 61.7MB |
| gcr.io/google-containers/addon-resizer      | functional-102159 | ffd4cfbbe753e | 32.9MB |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (1.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image ls --format json
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image ls --format json: (1.5916025s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-102159 image ls --format json:
[{"id":"6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"50600000"},{"id":"beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"61700000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"aca5d481ccd1280b2a4616295d50550ed8e4b3ec8fbf9c76c47eab75a3561df5","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-102159"],"size":"1240000"},{"id":"0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"128000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a",
"repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"1862325235bdb7df34544ed0eb8886870c8d4c39dec65cc89a8997aad77cc6ec","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-102159"],"size":"30"},{"id":"c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],
"size":"48800000"},{"id":"a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"117000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-102159"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.8"],"size":"711000"},{"id":"a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"300000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (1.59s)
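
The JSON format above is an array of objects with id, repoDigests, repoTags and size fields, which makes it the easiest of the list formats to consume programmatically. A minimal sketch, assuming the binary and profile name from this run; the struct fields simply mirror the keys visible in the output:

-- go sketch --
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors one entry of `image ls --format json` as shown above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "functional-102159",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for _, img := range images {
		fmt.Printf("%v size=%s\n", img.RepoTags, img.Size)
	}
}
-- /go sketch --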

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image ls --format yaml
functional_test.go:257: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image ls --format yaml: (1.5732035s)
functional_test.go:262: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-102159 image ls --format yaml:
- id: beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "61700000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "50600000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.8
size: "711000"
- id: a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "300000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "128000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-102159
size: "32900000"
- id: 1862325235bdb7df34544ed0eb8886870c8d4c39dec65cc89a8997aad77cc6ec
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-102159
size: "30"
- id: 60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "117000000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (15.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-102159 ssh pgrep buildkitd: exit status 1 (1.6653653s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image build -t localhost/my-image:functional-102159 testdata\build
functional_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image build -t localhost/my-image:functional-102159 testdata\build: (12.2040692s)
functional_test.go:316: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-102159 image build -t localhost/my-image:functional-102159 testdata\build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in afa9aaf7d7fd
Removing intermediate container afa9aaf7d7fd
---> b382423136f2
Step 3/3 : ADD content.txt /
---> aca5d481ccd1
Successfully built aca5d481ccd1
Successfully tagged localhost/my-image:functional-102159
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image ls: (1.4145547s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (15.28s)
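
The build output above corresponds to a three-step Dockerfile under testdata\build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), followed by an image ls to confirm the new tag is present in the node. A minimal sketch of those two commands, assuming the binary, profile and paths from this run:

-- go sketch --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	mk := "out/minikube-windows-amd64.exe"

	// Build the testdata\build context into the node's image store.
	build := exec.Command(mk, "-p", "functional-102159", "image", "build",
		"-t", "localhost/my-image:functional-102159", `testdata\build`)
	if out, err := build.CombinedOutput(); err != nil {
		fmt.Printf("image build failed: %v\n%s\n", err, out)
		return
	}

	// Confirm the freshly built tag shows up, as functional_test.go:444 does.
	out, err := exec.Command(mk, "-p", "functional-102159", "image", "ls").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	fmt.Printf("%s", out)
}
-- /go sketch --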

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (4.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.9034177s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-102159
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (10.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image load --daemon gcr.io/google-containers/addon-resizer:functional-102159

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image load --daemon gcr.io/google-containers/addon-resizer:functional-102159: (9.1330007s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image ls: (1.0223047s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (10.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image load --daemon gcr.io/google-containers/addon-resizer:functional-102159

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image load --daemon gcr.io/google-containers/addon-resizer:functional-102159: (5.1948984s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image ls: (1.2129436s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.526937s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-102159
functional_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image load --daemon gcr.io/google-containers/addon-resizer:functional-102159

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image load --daemon gcr.io/google-containers/addon-resizer:functional-102159: (10.4091855s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image ls: (1.1207921s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image save gcr.io/google-containers/addon-resizer:functional-102159 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image save gcr.io/google-containers/addon-resizer:functional-102159 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (5.4140347s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.41s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (7.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:492: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-102159 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-102159"
E0114 10:27:33.000842    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
functional_test.go:492: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-102159 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-102159": (4.6918911s)
functional_test.go:515: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-102159 docker-env | Invoke-Expression ; docker images"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:515: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-102159 docker-env | Invoke-Expression ; docker images": (2.7950431s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (7.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image rm gcr.io/google-containers/addon-resizer:functional-102159
functional_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image rm gcr.io/google-containers/addon-resizer:functional-102159: (1.1326931s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image ls: (1.1182849s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (6.6082513s)
functional_test.go:444: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image ls
functional_test.go:444: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image ls: (1.1262413s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.73s)
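
ImageSaveToFile and ImageLoadFromFile above together form a tar round trip: the tagged image is written out with `image save` and re-imported with `image load`. A minimal sketch of that round trip, assuming the binary, profile and tarball path from this run:

-- go sketch --
package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used in this report with the given arguments.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	const tarball = `C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar`
	const image = "gcr.io/google-containers/addon-resizer:functional-102159"

	if err := run("-p", "functional-102159", "image", "save", image, tarball); err != nil {
		fmt.Println("image save failed:", err)
		return
	}
	if err := run("-p", "functional-102159", "image", "load", tarball); err != nil {
		fmt.Println("image load failed:", err)
		return
	}
	fmt.Println("save/load round trip complete")
}
-- /go sketch --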

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.97s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (1.00s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-102159
functional_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-102159 image save --daemon gcr.io/google-containers/addon-resizer:functional-102159
functional_test.go:420: (dbg) Done: out/minikube-windows-amd64.exe -p functional-102159 image save --daemon gcr.io/google-containers/addon-resizer:functional-102159: (9.1165092s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-102159
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.63s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: context deadline exceeded (0s)
functional_test.go:188: failed to remove image "gcr.io/google-containers/addon-resizer:1.8.8" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8": context deadline exceeded
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-102159
functional_test.go:186: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:functional-102159: context deadline exceeded (0s)
functional_test.go:188: failed to remove image "gcr.io/google-containers/addon-resizer:functional-102159" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:functional-102159": context deadline exceeded
--- PASS: TestFunctional/delete_addon-resizer_images (0.02s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-102159
functional_test.go:194: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-102159: context deadline exceeded (0s)
functional_test.go:196: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-102159": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-102159
functional_test.go:202: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-102159: context deadline exceeded (0s)
functional_test.go:204: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-102159": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
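
The "context deadline exceeded (0s)" lines in the three cleanup subtests above come from running `docker rmi -f` with a test context whose deadline has already expired: the command is never actually started, the helper only logs the failure, and the subtests still pass. A minimal reproduction of that error with only the standard library (illustrative, not the harness's cleanup code):

-- go sketch --
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// An already-expired context: exec refuses to start the command at all.
	ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
	defer cancel()
	time.Sleep(time.Millisecond)

	err := exec.CommandContext(ctx, "docker", "rmi", "-f",
		"gcr.io/google-containers/addon-resizer:1.8.8").Run()
	fmt.Println(err) // prints "context deadline exceeded"
}
-- /go sketch --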

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (102.09s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-110215 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-110215 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: (1m42.0865368s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (102.09s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (61.31s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-110215 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-110215 addons enable ingress --alsologtostderr -v=5: (1m1.3102871s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (61.31s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (1.87s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-110215 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-110215 addons enable ingress-dns --alsologtostderr -v=5: (1.8713269s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (1.87s)

                                                
                                    
TestJSONOutput/start/Command (104.99s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-110554 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0114 11:06:02.856269    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:06:02.872038    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:06:02.887516    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:06:02.918520    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:06:02.966489    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:06:03.060932    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:06:03.231722    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:06:03.557214    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:06:04.202687    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:06:05.483252    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:06:08.058881    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:06:13.187202    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:06:23.430542    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:06:43.926336    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:07:05.207628    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 11:07:24.896652    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-110554 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m44.9862618s)
--- PASS: TestJSONOutput/start/Command (104.99s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (2.24s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-110554 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-110554 --output=json --user=testUser: (2.237886s)
--- PASS: TestJSONOutput/pause/Command (2.24s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (1.88s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-110554 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-110554 --output=json --user=testUser: (1.8841345s)
--- PASS: TestJSONOutput/unpause/Command (1.88s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.42s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-110554 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-110554 --output=json --user=testUser: (8.4147475s)
--- PASS: TestJSONOutput/stop/Command (8.42s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.8s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-110756 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-110756 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (391.1627ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2df748ad-a001-4761-86d6-2bb1115056ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-110756] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ebcb001-7313-4990-b68a-a3dd9fe1b537","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"f22ddb3a-8750-4882-991c-160cc63e9388","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"02edd53b-80f1-4ce3-9f5d-946c482594d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15642"}}
	{"specversion":"1.0","id":"9b868b9e-db35-487d-993f-6dd23ae019d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"106f28bd-a2ce-4b0f-aaa5-28666d8270cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-110756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-110756
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-110756: (1.4037999s)
--- PASS: TestErrorJSONOutput (1.80s)
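
Each line of the captured stdout above is a CloudEvents-style JSON object (specversion, id, source, type, datacontenttype, data), with the error event carrying the exit code and message. A minimal sketch that decodes the error event copied verbatim from the output above; the struct is illustrative and its field names simply mirror the keys in the log:

-- go sketch --
package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent mirrors the JSON lines emitted with --output=json above.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The error event from the captured stdout above, verbatim.
	line := `{"specversion":"1.0","id":"106f28bd-a2ce-4b0f-aaa5-28666d8270cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Printf("%s: %s (exit code %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
}
-- /go sketch --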

                                                
                                    
TestKicCustomNetwork/create_custom_network (88.27s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-110758 --network=
E0114 11:08:46.819545    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-110758 --network=: (1m22.0834427s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-110758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-110758
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-110758: (5.9630824s)
--- PASS: TestKicCustomNetwork/create_custom_network (88.27s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (85.68s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-110927 --network=bridge
E0114 11:10:00.687035    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:10:00.702380    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:10:00.717550    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:10:00.748536    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:10:00.795319    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:10:00.889175    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:10:01.061369    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:10:01.389871    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:10:02.041404    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:10:04.792137    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:10:07.358006    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:10:12.491400    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:10:22.733170    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:10:43.222727    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-110927 --network=bridge: (1m20.2209682s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-110927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-110927
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-110927: (5.2607989s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (85.68s)

                                                
                                    
TestKicExistingNetwork (86.48s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-111053 --network=existing-network
E0114 11:11:02.871983    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:11:24.183757    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:11:30.672074    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:11:48.403676    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 11:12:05.213586    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-111053 --network=existing-network: (1m19.8555002s)
helpers_test.go:175: Cleaning up "existing-network-111053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-111053
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-111053: (5.346927s)
--- PASS: TestKicExistingNetwork (86.48s)
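This test points minikube at a Docker network that already exists. A rough equivalent, assuming the network is created ahead of time (the name existing-network matches the flag in the log; the create/rm steps are an assumption about the setup and are not shown above):

    # create a user-defined Docker network outside of minikube (assumed setup, not in the log)
    docker network create existing-network
    # reuse that network rather than letting minikube create one per profile
    minikube start -p existing-net-demo --network=existing-network
    # clean up the profile and the network
    minikube delete -p existing-net-demo
    docker network rm existing-network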

                                                
                                    
TestKicCustomSubnet (89.57s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-111219 --subnet=192.168.60.0/24
E0114 11:12:46.112116    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-111219 --subnet=192.168.60.0/24: (1m23.5240393s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-111219 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-111219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-111219
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-111219: (5.839873s)
--- PASS: TestKicCustomSubnet (89.57s)
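A minimal sketch of the subnet check with an illustrative profile name (subnet-demo); the Docker network created for a profile carries the profile's name, which is what the inspect command above relies on:

    # create a cluster whose Docker network uses a caller-chosen subnet
    minikube start -p subnet-demo --subnet=192.168.60.0/24
    # print the subnet actually assigned to the profile's network; it should echo the flag
    docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
    minikube delete -p subnet-demo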

                                                
                                    
TestKicStaticIP (91.08s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-111348 --static-ip=192.168.200.200
E0114 11:15:00.685163    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-111348 --static-ip=192.168.200.200: (1m24.1367294s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-111348 ip
helpers_test.go:175: Cleaning up "static-ip-111348" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-111348
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-111348: (6.1052716s)
--- PASS: TestKicStaticIP (91.08s)
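Equivalent standalone commands for the static-IP check, with a hypothetical profile name (staticip-demo):

    # pin the node to a fixed address on the profile's Docker network
    minikube start -p staticip-demo --static-ip=192.168.200.200
    # report the node IP; it should match the value passed above
    minikube -p staticip-demo ip
    minikube delete -p staticip-demo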

                                                
                                    
TestMainNoArgs (0.37s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.37s)

                                                
                                    
TestMinikubeProfile (184.08s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-111520 --driver=docker
E0114 11:15:29.954415    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:16:02.861567    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-111520 --driver=docker: (1m20.4831606s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-111520 --driver=docker
E0114 11:17:05.204904    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-111520 --driver=docker: (1m22.4357193s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-111520
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.6871377s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-111520
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (3.3010376s)
helpers_test.go:175: Cleaning up "second-111520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-111520
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-111520: (7.9876151s)
helpers_test.go:175: Cleaning up "first-111520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-111520
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-111520: (5.9442106s)
--- PASS: TestMinikubeProfile (184.08s)
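The profile-switching flow above, condensed into standalone commands (profile names first-demo and second-demo are illustrative):

    # create two independent profiles
    minikube start -p first-demo --driver=docker
    minikube start -p second-demo --driver=docker
    # make one profile active, then confirm via the JSON listing
    minikube profile first-demo
    minikube profile list -ojson
    minikube profile second-demo
    minikube profile list -ojson
    # clean up both profiles
    minikube delete -p second-demo
    minikube delete -p first-demo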

                                                
                                    
TestMountStart/serial/StartWithMountFirst (21.69s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-111824 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-111824 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (20.6740265s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.69s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (1.37s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-111824 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-111824 ssh -- ls /minikube-host: (1.373343s)
--- PASS: TestMountStart/serial/VerifyMountFirst (1.37s)
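The start-and-verify pattern used throughout the TestMountStart series, as standalone commands (mount-demo is an illustrative profile name; the flags are the ones traced above):

    # start a Kubernetes-less node with a 9p host mount on a fixed port
    minikube start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
    # the mounted host directory should be listable inside the node
    minikube -p mount-demo ssh -- ls /minikube-host
    minikube delete -p mount-demo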

                                                
                                    
TestMountStart/serial/StartWithMountSecond (19.04s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-111824 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-111824 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (18.0413785s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.04s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (1.35s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-111824 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-111824 ssh -- ls /minikube-host: (1.3460019s)
--- PASS: TestMountStart/serial/VerifyMountSecond (1.35s)

                                                
                                    
TestMountStart/serial/DeleteFirst (4.63s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-111824 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-111824 --alsologtostderr -v=5: (4.6268734s)
--- PASS: TestMountStart/serial/DeleteFirst (4.63s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (1.39s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-111824 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-111824 ssh -- ls /minikube-host: (1.3927079s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (1.39s)

                                                
                                    
TestMountStart/serial/Stop (2.85s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-111824
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-111824: (2.8513716s)
--- PASS: TestMountStart/serial/Stop (2.85s)

                                                
                                    
TestMountStart/serial/RestartStopped (13.9s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-111824
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-111824: (12.8844168s)
--- PASS: TestMountStart/serial/RestartStopped (13.90s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (1.32s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-111824 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-111824 ssh -- ls /minikube-host: (1.3202863s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (1.32s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (206.64s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-111937 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0114 11:20:00.692823    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:21:02.864199    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:22:05.221172    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 11:22:26.043117    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
multinode_test.go:83: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-111937 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (3m24.2547818s)
multinode_test.go:89: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 status --alsologtostderr
multinode_test.go:89: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 status --alsologtostderr: (2.388866s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (206.64s)
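A condensed version of the two-node bring-up, using the illustrative profile name multinode-demo that the later sketches reuse:

    # create a two-node cluster and wait for all components
    minikube start -p multinode-demo --wait=true --memory=2200 --nodes=2 --driver=docker
    # both the control plane and the worker should report Running
    minikube -p multinode-demo status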

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (11.13s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- rollout status deployment/busybox: (3.5794656s)
multinode_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- exec busybox-65db55d5d6-5llqm -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- exec busybox-65db55d5d6-zsp29 -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- exec busybox-65db55d5d6-5llqm -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- exec busybox-65db55d5d6-5llqm -- nslookup kubernetes.default: (1.0062143s)
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- exec busybox-65db55d5d6-zsp29 -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- exec busybox-65db55d5d6-5llqm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- exec busybox-65db55d5d6-zsp29 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (11.13s)
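The DNS checks above boil down to deploying the busybox workload and resolving names from inside a pod. A sketch using the illustrative multinode-demo profile; <busybox-pod> is a placeholder to fill in from get pods:

    # deploy the test workload and wait for the rollout to finish
    minikube kubectl -p multinode-demo -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p multinode-demo -- rollout status deployment/busybox
    # resolve an external name and the in-cluster service name from one busybox pod
    minikube kubectl -p multinode-demo -- exec <busybox-pod> -- nslookup kubernetes.io
    minikube kubectl -p multinode-demo -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local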

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.75s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- exec busybox-65db55d5d6-5llqm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- exec busybox-65db55d5d6-5llqm -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- exec busybox-65db55d5d6-zsp29 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-111937 -- exec busybox-65db55d5d6-zsp29 -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (3.75s)

                                                
                                    
TestMultiNode/serial/AddNode (62.13s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-111937 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-111937 -v 3 --alsologtostderr: (58.2423409s)
multinode_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 status --alsologtostderr: (3.8843649s)
--- PASS: TestMultiNode/serial/AddNode (62.13s)

                                                
                                    
TestMultiNode/serial/ProfileList (1.55s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5526916s)
--- PASS: TestMultiNode/serial/ProfileList (1.55s)

                                                
                                    
TestMultiNode/serial/CopyFile (50.2s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 status --output json --alsologtostderr: (3.364179s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 cp testdata\cp-test.txt multinode-111937:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 cp testdata\cp-test.txt multinode-111937:/home/docker/cp-test.txt: (1.4256247s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937 "sudo cat /home/docker/cp-test.txt": (1.4495844s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile4028405554\001\cp-test_multinode-111937.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile4028405554\001\cp-test_multinode-111937.txt: (1.4468454s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937 "sudo cat /home/docker/cp-test.txt": (1.4049048s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937:/home/docker/cp-test.txt multinode-111937-m02:/home/docker/cp-test_multinode-111937_multinode-111937-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937:/home/docker/cp-test.txt multinode-111937-m02:/home/docker/cp-test_multinode-111937_multinode-111937-m02.txt: (2.0039301s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937 "sudo cat /home/docker/cp-test.txt": (1.4350837s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m02 "sudo cat /home/docker/cp-test_multinode-111937_multinode-111937-m02.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m02 "sudo cat /home/docker/cp-test_multinode-111937_multinode-111937-m02.txt": (1.5058932s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937:/home/docker/cp-test.txt multinode-111937-m03:/home/docker/cp-test_multinode-111937_multinode-111937-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937:/home/docker/cp-test.txt multinode-111937-m03:/home/docker/cp-test_multinode-111937_multinode-111937-m03.txt: (2.0187042s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937 "sudo cat /home/docker/cp-test.txt": (1.5105805s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m03 "sudo cat /home/docker/cp-test_multinode-111937_multinode-111937-m03.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m03 "sudo cat /home/docker/cp-test_multinode-111937_multinode-111937-m03.txt": (1.4496392s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 cp testdata\cp-test.txt multinode-111937-m02:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 cp testdata\cp-test.txt multinode-111937-m02:/home/docker/cp-test.txt: (1.4578929s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m02 "sudo cat /home/docker/cp-test.txt": (1.4092179s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile4028405554\001\cp-test_multinode-111937-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile4028405554\001\cp-test_multinode-111937-m02.txt: (1.4337903s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m02 "sudo cat /home/docker/cp-test.txt": (1.3858543s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937-m02:/home/docker/cp-test.txt multinode-111937:/home/docker/cp-test_multinode-111937-m02_multinode-111937.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937-m02:/home/docker/cp-test.txt multinode-111937:/home/docker/cp-test_multinode-111937-m02_multinode-111937.txt: (2.053129s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m02 "sudo cat /home/docker/cp-test.txt": (1.3688774s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937 "sudo cat /home/docker/cp-test_multinode-111937-m02_multinode-111937.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937 "sudo cat /home/docker/cp-test_multinode-111937-m02_multinode-111937.txt": (1.5945018s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937-m02:/home/docker/cp-test.txt multinode-111937-m03:/home/docker/cp-test_multinode-111937-m02_multinode-111937-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937-m02:/home/docker/cp-test.txt multinode-111937-m03:/home/docker/cp-test_multinode-111937-m02_multinode-111937-m03.txt: (1.9843256s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m02 "sudo cat /home/docker/cp-test.txt": (1.4411776s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m03 "sudo cat /home/docker/cp-test_multinode-111937-m02_multinode-111937-m03.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m03 "sudo cat /home/docker/cp-test_multinode-111937-m02_multinode-111937-m03.txt": (1.4333791s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 cp testdata\cp-test.txt multinode-111937-m03:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 cp testdata\cp-test.txt multinode-111937-m03:/home/docker/cp-test.txt: (1.4066621s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m03 "sudo cat /home/docker/cp-test.txt": (1.3605571s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile4028405554\001\cp-test_multinode-111937-m03.txt
E0114 11:25:00.699834    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile4028405554\001\cp-test_multinode-111937-m03.txt: (1.411596s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m03 "sudo cat /home/docker/cp-test.txt": (1.4740655s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937-m03:/home/docker/cp-test.txt multinode-111937:/home/docker/cp-test_multinode-111937-m03_multinode-111937.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937-m03:/home/docker/cp-test.txt multinode-111937:/home/docker/cp-test_multinode-111937-m03_multinode-111937.txt: (2.1579908s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m03 "sudo cat /home/docker/cp-test.txt": (1.4349203s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937 "sudo cat /home/docker/cp-test_multinode-111937-m03_multinode-111937.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937 "sudo cat /home/docker/cp-test_multinode-111937-m03_multinode-111937.txt": (1.4510439s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937-m03:/home/docker/cp-test.txt multinode-111937-m02:/home/docker/cp-test_multinode-111937-m03_multinode-111937-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 cp multinode-111937-m03:/home/docker/cp-test.txt multinode-111937-m02:/home/docker/cp-test_multinode-111937-m03_multinode-111937-m02.txt: (2.0130181s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m03 "sudo cat /home/docker/cp-test.txt": (1.4337142s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m02 "sudo cat /home/docker/cp-test_multinode-111937-m03_multinode-111937-m02.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 ssh -n multinode-111937-m02 "sudo cat /home/docker/cp-test_multinode-111937-m03_multinode-111937-m02.txt": (1.4642868s)
--- PASS: TestMultiNode/serial/CopyFile (50.20s)
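The copy matrix above exercises host-to-node and node-to-node transfers; the core pattern, with the illustrative multinode-demo profile:

    # copy a file from the host into the primary node, then read it back over ssh
    minikube -p multinode-demo cp testdata\cp-test.txt multinode-demo:/home/docker/cp-test.txt
    minikube -p multinode-demo ssh -n multinode-demo "sudo cat /home/docker/cp-test.txt"
    # node-to-node copies use the same command, addressing source and target by node name
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"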

                                                
                                    
TestMultiNode/serial/StopNode (8.04s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 node stop m03: (2.7314443s)
multinode_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-111937 status: exit status 7 (2.709906s)

                                                
                                                
-- stdout --
	multinode-111937
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-111937-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-111937-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-111937 status --alsologtostderr: exit status 7 (2.6031166s)

                                                
                                                
-- stdout --
	multinode-111937
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-111937-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-111937-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 11:25:18.920402    7644 out.go:296] Setting OutFile to fd 972 ...
	I0114 11:25:18.982052    7644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 11:25:18.982052    7644 out.go:309] Setting ErrFile to fd 996...
	I0114 11:25:18.982052    7644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 11:25:18.993141    7644 out.go:303] Setting JSON to false
	I0114 11:25:18.993141    7644 mustload.go:65] Loading cluster: multinode-111937
	I0114 11:25:18.994070    7644 notify.go:220] Checking for updates...
	I0114 11:25:18.994070    7644 config.go:180] Loaded profile config "multinode-111937": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 11:25:18.994070    7644 status.go:255] checking status of multinode-111937 ...
	I0114 11:25:19.009054    7644 cli_runner.go:164] Run: docker container inspect multinode-111937 --format={{.State.Status}}
	I0114 11:25:19.194370    7644 status.go:330] multinode-111937 host status = "Running" (err=<nil>)
	I0114 11:25:19.194370    7644 host.go:66] Checking if "multinode-111937" exists ...
	I0114 11:25:19.201376    7644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-111937
	I0114 11:25:19.398961    7644 host.go:66] Checking if "multinode-111937" exists ...
	I0114 11:25:19.409832    7644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 11:25:19.416609    7644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-111937
	I0114 11:25:19.617981    7644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63739 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\multinode-111937\id_rsa Username:docker}
	I0114 11:25:19.760242    7644 ssh_runner.go:195] Run: systemctl --version
	I0114 11:25:19.784244    7644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 11:25:19.820904    7644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-111937
	I0114 11:25:20.026740    7644 kubeconfig.go:92] found "multinode-111937" server: "https://127.0.0.1:63738"
	I0114 11:25:20.026790    7644 api_server.go:165] Checking apiserver status ...
	I0114 11:25:20.040480    7644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 11:25:20.094474    7644 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1792/cgroup
	I0114 11:25:20.120765    7644 api_server.go:181] apiserver freezer: "20:freezer:/docker/4ad3469eb2672fc5e55d2e385c84184d36f5948df7753262ca5918cfa70bddec/kubepods/burstable/pod8b8cd3ad24ffe5e81425700a3cce8912/d7380a2330844e0d58ff60103b940032e84cf2d23b4fd019a49a5c5cf5ec8b11"
	I0114 11:25:20.129785    7644 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4ad3469eb2672fc5e55d2e385c84184d36f5948df7753262ca5918cfa70bddec/kubepods/burstable/pod8b8cd3ad24ffe5e81425700a3cce8912/d7380a2330844e0d58ff60103b940032e84cf2d23b4fd019a49a5c5cf5ec8b11/freezer.state
	I0114 11:25:20.156690    7644 api_server.go:203] freezer state: "THAWED"
	I0114 11:25:20.157214    7644 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63738/healthz ...
	I0114 11:25:20.176116    7644 api_server.go:278] https://127.0.0.1:63738/healthz returned 200:
	ok
	I0114 11:25:20.176116    7644 status.go:421] multinode-111937 apiserver status = Running (err=<nil>)
	I0114 11:25:20.176116    7644 status.go:257] multinode-111937 status: &{Name:multinode-111937 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0114 11:25:20.176116    7644 status.go:255] checking status of multinode-111937-m02 ...
	I0114 11:25:20.191699    7644 cli_runner.go:164] Run: docker container inspect multinode-111937-m02 --format={{.State.Status}}
	I0114 11:25:20.404041    7644 status.go:330] multinode-111937-m02 host status = "Running" (err=<nil>)
	I0114 11:25:20.404041    7644 host.go:66] Checking if "multinode-111937-m02" exists ...
	I0114 11:25:20.415502    7644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-111937-m02
	I0114 11:25:20.606008    7644 host.go:66] Checking if "multinode-111937-m02" exists ...
	I0114 11:25:20.618492    7644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 11:25:20.624611    7644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-111937-m02
	I0114 11:25:20.841143    7644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63809 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\multinode-111937-m02\id_rsa Username:docker}
	I0114 11:25:20.990550    7644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 11:25:21.025837    7644 status.go:257] multinode-111937-m02 status: &{Name:multinode-111937-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0114 11:25:21.025837    7644 status.go:255] checking status of multinode-111937-m03 ...
	I0114 11:25:21.042215    7644 cli_runner.go:164] Run: docker container inspect multinode-111937-m03 --format={{.State.Status}}
	I0114 11:25:21.230460    7644 status.go:330] multinode-111937-m03 host status = "Stopped" (err=<nil>)
	I0114 11:25:21.230460    7644 status.go:343] host is not running, skipping remaining checks
	I0114 11:25:21.230551    7644 status.go:257] multinode-111937-m03 status: &{Name:multinode-111937-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (8.04s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (34.73s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 node start m03 --alsologtostderr: (30.7125493s)
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 status
multinode_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 status: (3.4925626s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (34.73s)
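Stopping and restarting a single worker, as exercised by StopNode and StartAfterStop; note that status exits with code 7 while any node is stopped, which is why the status runs above are reported as non-zero exits:

    # stop only the third node, then restart it
    minikube -p multinode-demo node stop m03
    # exits 7 while m03 is down; the remaining nodes still show Running
    minikube -p multinode-demo status
    minikube -p multinode-demo node start m03
    minikube -p multinode-demo status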

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (126.85s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-111937
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-111937
E0114 11:26:02.881239    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
multinode_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-111937: (27.4418509s)
multinode_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-111937 --wait=true -v=8 --alsologtostderr
E0114 11:26:25.323806    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:27:05.215068    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
multinode_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-111937 --wait=true -v=8 --alsologtostderr: (1m38.6737642s)
multinode_test.go:298: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-111937
--- PASS: TestMultiNode/serial/RestartKeepsNodes (126.85s)

                                                
                                    
TestMultiNode/serial/DeleteNode (15.37s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 node delete m03: (9.6796461s)
multinode_test.go:398: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 status --alsologtostderr
multinode_test.go:398: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 status --alsologtostderr: (2.5567978s)
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:422: (dbg) Done: kubectl get nodes: (2.6718803s)
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (15.37s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (26.8s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 stop
E0114 11:28:28.421825    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
multinode_test.go:312: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 stop: (25.1673548s)
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-111937 status: exit status 7 (842.6318ms)

                                                
                                                
-- stdout --
	multinode-111937
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-111937-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-111937 status --alsologtostderr: exit status 7 (785.1051ms)

                                                
                                                
-- stdout --
	multinode-111937
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-111937-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 11:28:44.475794    6392 out.go:296] Setting OutFile to fd 748 ...
	I0114 11:28:44.530638    6392 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 11:28:44.530638    6392 out.go:309] Setting ErrFile to fd 928...
	I0114 11:28:44.530638    6392 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 11:28:44.553600    6392 out.go:303] Setting JSON to false
	I0114 11:28:44.553600    6392 mustload.go:65] Loading cluster: multinode-111937
	I0114 11:28:44.553600    6392 notify.go:220] Checking for updates...
	I0114 11:28:44.554522    6392 config.go:180] Loaded profile config "multinode-111937": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 11:28:44.554522    6392 status.go:255] checking status of multinode-111937 ...
	I0114 11:28:44.569336    6392 cli_runner.go:164] Run: docker container inspect multinode-111937 --format={{.State.Status}}
	I0114 11:28:44.787803    6392 status.go:330] multinode-111937 host status = "Stopped" (err=<nil>)
	I0114 11:28:44.787803    6392 status.go:343] host is not running, skipping remaining checks
	I0114 11:28:44.787803    6392 status.go:257] multinode-111937 status: &{Name:multinode-111937 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0114 11:28:44.787803    6392 status.go:255] checking status of multinode-111937-m02 ...
	I0114 11:28:44.801702    6392 cli_runner.go:164] Run: docker container inspect multinode-111937-m02 --format={{.State.Status}}
	I0114 11:28:44.989286    6392 status.go:330] multinode-111937-m02 host status = "Stopped" (err=<nil>)
	I0114 11:28:44.989286    6392 status.go:343] host is not running, skipping remaining checks
	I0114 11:28:44.989286    6392 status.go:257] multinode-111937-m02 status: &{Name:multinode-111937-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (26.80s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (116.97s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-111937 --wait=true -v=8 --alsologtostderr --driver=docker
E0114 11:30:00.692307    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
multinode_test.go:352: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-111937 --wait=true -v=8 --alsologtostderr --driver=docker: (1m53.6219608s)
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-111937 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-111937 status --alsologtostderr: (2.5271232s)
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (116.97s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (91.41s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-111937
multinode_test.go:450: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-111937-m02 --driver=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-111937-m02 --driver=docker: exit status 14 (408.0681ms)

                                                
                                                
-- stdout --
	* [multinode-111937-m02] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-111937-m02' is duplicated with machine name 'multinode-111937-m02' in profile 'multinode-111937'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-111937-m03 --driver=docker
E0114 11:31:02.874671    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
multinode_test.go:458: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-111937-m03 --driver=docker: (1m19.9709509s)
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-111937
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-111937: exit status 80 (2.3044904s)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-111937
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-111937-m03 already exists in multinode-111937-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_45.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-111937-m03
E0114 11:32:05.216283    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
multinode_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-111937-m03: (8.3808893s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (91.41s)

                                                
                                    
TestPreload (243.15s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-113225 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-113225 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (2m7.511518s)
preload_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-113225 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-113225 -- docker pull gcr.io/k8s-minikube/busybox: (3.0103349s)
preload_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-113225 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.24.6
E0114 11:35:00.695991    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:36:02.880879    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
preload_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-113225 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.24.6: (1m44.7386834s)
preload_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-113225 -- docker images
preload_test.go:76: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-113225 -- docker images: (1.5395576s)
helpers_test.go:175: Cleaning up "test-preload-113225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-113225
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-113225: (6.3524591s)
--- PASS: TestPreload (243.15s)
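The preload check is effectively an in-place Kubernetes upgrade with an extra image pulled in between; a sketch with an illustrative profile name (preload-demo):

    # bring the cluster up on the older version without the preloaded image tarball
    minikube start -p preload-demo --memory=2200 --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
    # pull an extra image into the node's Docker daemon
    minikube ssh -p preload-demo -- docker pull gcr.io/k8s-minikube/busybox
    # restart onto the newer version; the pulled image should still be listed afterwards
    minikube start -p preload-demo --memory=2200 --wait=true --driver=docker --kubernetes-version=v1.24.6
    minikube ssh -p preload-demo -- docker images
    minikube delete -p preload-demo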

                                                
                                    
TestScheduledStopWindows (153.4s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-113628 --memory=2048 --driver=docker
E0114 11:37:05.227718    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-113628 --memory=2048 --driver=docker: (1m19.6528048s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-113628 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-113628 --schedule 5m: (1.649723s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-113628 -n scheduled-stop-113628
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-113628 -n scheduled-stop-113628: (1.5967149s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-113628 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-113628 -- sudo systemctl show minikube-scheduled-stop --no-page: (1.4076019s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-113628 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-113628 --schedule 5s: (3.2015968s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-113628
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-113628: exit status 7 (598.6416ms)

                                                
                                                
-- stdout --
	scheduled-stop-113628
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-113628 -n scheduled-stop-113628
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-113628 -n scheduled-stop-113628: exit status 7 (572.0953ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-113628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-113628
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-113628: (4.7013181s)
--- PASS: TestScheduledStopWindows (153.40s)
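
For context on the commands the scheduled-stop test drives, here is a minimal Go sketch of the same flow; it is not the test's own code, and it assumes a minikube binary on PATH plus a hypothetical profile name.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes minikube with the given arguments and returns combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "scheduled-stop-demo" // hypothetical profile name

	// Schedule a stop five minutes out, mirroring "stop --schedule 5m" above.
	if out, err := run("stop", "-p", profile, "--schedule", "5m"); err != nil {
		fmt.Println("schedule failed:", err, out)
		return
	}

	// Read back the remaining time, as the test does via {{.TimeToStop}}.
	out, err := run("status", "--format={{.TimeToStop}}", "-p", profile)
	fmt.Println("time to stop:", out, "err:", err)
}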

                                                
                                    
x
+
TestInsufficientStorage (55.02s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-113902 --memory=2048 --output=json --wait=true --driver=docker
E0114 11:39:06.056438    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-113902 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (47.4041251s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c91353cd-a2ad-4529-8ec7-dca4cc39a79b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-113902] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e27f7602-f012-466a-9da3-27cbdd97b77e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"581de2cd-74d4-4a8e-857c-a5880c925724","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"25840b88-8b5c-492f-9553-69844311e516","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15642"}}
	{"specversion":"1.0","id":"251ab841-05f6-4a33-877d-8f6204198dfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6c20570b-6aa1-4562-b8c5-b3de159503ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b88c46a9-623a-4a10-8eb2-ac0845135c18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e2ca1572-9679-46fb-a53b-c338427d7f7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"09f7c6bf-4feb-43c8-92c3-9d790c9dd224","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"84fe3db8-95ea-44db-b77e-2181b8c1abea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-113902 in cluster insufficient-storage-113902","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fa6cde60-f3c1-426c-9f3b-9b953d5baac0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f1e67687-9723-48ae-939d-55c23594c319","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4218a7c5-9468-48eb-9f36-e11546046fc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-113902 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-113902 --output=json --layout=cluster: exit status 7 (1.3730371s)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-113902","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-113902","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 11:39:50.984457    9312 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-113902" does not appear in C:\Users\jenkins.minikube2\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-113902 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-113902 --output=json --layout=cluster: exit status 7 (1.4327824s)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-113902","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-113902","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 11:39:52.416997    8864 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-113902" does not appear in C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	E0114 11:39:52.458773    8864 status.go:559] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\insufficient-storage-113902\events.json: The system cannot find the file specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-113902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-113902
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-113902: (4.8105932s)
--- PASS: TestInsufficientStorage (55.02s)
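
The low-storage behaviour above is driven by the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE overrides visible in the JSON events, and the start command is expected to fail with exit code 26 (RSRC_DOCKER_STORAGE). The following is a minimal sketch of provoking the same result, not the test itself; it assumes minikube on PATH and uses a hypothetical profile name.

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical profile name; the flags match the start invocation above.
	cmd := exec.Command("minikube", "start", "-p", "low-storage-demo",
		"--memory=2048", "--output=json", "--wait=true", "--driver=docker")

	// Test-only storage overrides, with the same values seen in the log above.
	cmd.Env = append(os.Environ(),
		"MINIKUBE_TEST_STORAGE_CAPACITY=100",
		"MINIKUBE_TEST_AVAILABLE_STORAGE=19",
	)

	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 26 {
		fmt.Println("got exit code 26 (RSRC_DOCKER_STORAGE), as expected")
	}
}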

                                                
                                    
x
+
TestRunningBinaryUpgrade (238.73s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.1951921339.exe start -p running-upgrade-114352 --memory=2200 --vm-driver=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.1951921339.exe start -p running-upgrade-114352 --memory=2200 --vm-driver=docker: (2m19.8013138s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-114352 --memory=2200 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-114352 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m27.0496224s)
helpers_test.go:175: Cleaning up "running-upgrade-114352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-114352
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-114352: (11.1321822s)
--- PASS: TestRunningBinaryUpgrade (238.73s)

                                                
                                    
x
+
TestKubernetesUpgrade (344.61s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-114254 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-114254 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: (1m45.1564325s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-114254

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-114254: (5.5612885s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-114254 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-114254 status --format={{.Host}}: exit status 7 (662.7126ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-114254 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-114254 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker: (1m31.7320898s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-114254 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-114254 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-114254 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker: exit status 106 (476.1591ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-114254] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-114254
	    minikube start -p kubernetes-upgrade-114254 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1142542 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start -p kubernetes-upgrade-114254 --kubernetes-version=v1.25.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-114254 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-114254 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker: (2m7.9398341s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-114254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-114254

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-114254: (12.8137181s)
--- PASS: TestKubernetesUpgrade (344.61s)
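
As the log above shows, upgrading the Kubernetes version in place is allowed, while downgrading is refused with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED). A minimal sketch of that flow follows, assuming minikube on PATH and a hypothetical profile name; it is not the test's own code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// mk runs a single minikube command and returns its error, if any.
func mk(args ...string) error {
	return exec.Command("minikube", args...).Run()
}

func main() {
	p := "k8s-upgrade-demo" // hypothetical profile name

	// Create on an old release, stop, then restart on a newer one (allowed).
	_ = mk("start", "-p", p, "--memory=2200", "--kubernetes-version=v1.16.0", "--driver=docker")
	_ = mk("stop", "-p", p)
	_ = mk("start", "-p", p, "--memory=2200", "--kubernetes-version=v1.25.3", "--driver=docker")

	// Asking for the old version again should be refused with exit status 106.
	err := mk("start", "-p", p, "--memory=2200", "--kubernetes-version=v1.16.0", "--driver=docker")
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("downgrade refused, exit code:", ee.ExitCode()) // expect 106
	}
}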

                                                
                                    
x
+
TestMissingContainerUpgrade (296.37s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.4109897440.exe start -p missing-upgrade-113957 --memory=2200 --driver=docker
E0114 11:40:00.706014    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 11:41:02.879521    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:42:05.228373    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.1.4109897440.exe start -p missing-upgrade-113957 --memory=2200 --driver=docker: (3m1.9948368s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-113957
E0114 11:43:05.345035    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-113957: (11.6803782s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-113957
version_upgrade_test.go:336: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-113957 --memory=2200 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-113957 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m30.9831994s)
helpers_test.go:175: Cleaning up "missing-upgrade-113957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-113957

                                                
                                                
=== CONT  TestMissingContainerUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-113957: (10.593695s)
--- PASS: TestMissingContainerUpgrade (296.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-113957 --no-kubernetes --kubernetes-version=1.20 --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-113957 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (539.6671ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-113957] minikube v1.28.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2486 Build 19045.2486
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.54s)
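
The usage check above fires because --no-kubernetes and --kubernetes-version are mutually exclusive, producing exit code 14 (MK_USAGE) before any cluster work starts. A minimal sketch of the same check, assuming minikube on PATH and a hypothetical profile name:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Combining these two flags is rejected immediately, as in the log above.
	cmd := exec.Command("minikube", "start", "-p", "nok8s-demo", // hypothetical profile
		"--no-kubernetes", "--kubernetes-version=1.20", "--driver=docker")

	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Println("got exit code 14 (MK_USAGE), as expected")
	}
}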

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.73s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (130.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-113957 --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-113957 --driver=docker: (2m8.0946579s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-113957 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-113957 status -o json: (1.9173146s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (130.01s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (300.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.1709282633.exe start -p stopped-upgrade-113957 --memory=2200 --vm-driver=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.1709282633.exe start -p stopped-upgrade-113957 --memory=2200 --vm-driver=docker: (3m43.6526528s)
version_upgrade_test.go:199: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.1709282633.exe -p stopped-upgrade-113957 stop

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.9.0.1709282633.exe -p stopped-upgrade-113957 stop: (16.9465674s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-113957 --memory=2200 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-113957 --memory=2200 --alsologtostderr -v=1 --driver=docker: (59.5309614s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (300.14s)
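
The upgrade path above starts a cluster with an older release binary, stops it, then restarts the same profile with the binary under test. A minimal sketch of that sequence, assuming a previously downloaded old binary (the path here is hypothetical) and the current minikube on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	p := "stopped-upgrade-demo"                 // hypothetical profile name
	oldBinary := `C:\tools\minikube-v1.9.0.exe` // hypothetical path to an older release

	// Provision and stop the cluster with the old binary...
	_ = exec.Command(oldBinary, "start", "-p", p, "--memory=2200", "--vm-driver=docker").Run()
	_ = exec.Command(oldBinary, "-p", p, "stop").Run()

	// ...then restart it with the current binary, as the test does above.
	err := exec.Command("minikube", "start", "-p", p, "--memory=2200", "--driver=docker").Run()
	fmt.Println("upgrade restart err:", err) // nil on success
}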

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (34.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-113957 --no-kubernetes --driver=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-113957 --no-kubernetes --driver=docker: (25.0636286s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-113957 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-113957 status -o json: exit status 2 (1.6760317s)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-113957","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-113957
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-113957: (7.7814573s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (34.52s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (26.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-113957 --no-kubernetes --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-113957 --no-kubernetes --driver=docker: (26.9637298s)
--- PASS: TestNoKubernetes/serial/Start (26.96s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (1.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-113957 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-113957 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.5859935s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (1.59s)
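
The verification above treats a non-zero exit from systemctl is-active as "kubelet not running". A minimal sketch of the same probe, assuming minikube on PATH and a hypothetical profile name:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask the node whether the kubelet unit is active; --quiet suppresses output,
	// so only the exit status matters (non-zero means not running).
	cmd := exec.Command("minikube", "ssh", "-p", "nok8s-demo", // hypothetical profile
		"sudo systemctl is-active --quiet service kubelet")

	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not running (expected for --no-kubernetes):", err)
	} else {
		fmt.Println("kubelet is running")
	}
}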

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (19.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list

                                                
                                                
=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (17.022608s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (2.863901s)
--- PASS: TestNoKubernetes/serial/ProfileList (19.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (3.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-113957
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-113957: (3.0413409s)
--- PASS: TestNoKubernetes/serial/Stop (3.04s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (12.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-113957 --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-113957 --driver=docker: (12.0507507s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (12.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-113957 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-113957 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.4658535s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.47s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (3.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-113957
E0114 11:45:00.701928    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
version_upgrade_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-113957: (3.6646053s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.66s)

                                                
                                    
x
+
TestPause/serial/Start (135.73s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-114751 --memory=2048 --install-addons=false --wait=all --driver=docker

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-114751 --memory=2048 --install-addons=false --wait=all --driver=docker: (2m15.7304151s)
--- PASS: TestPause/serial/Start (135.73s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (162.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-114913 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
E0114 11:50:00.713489    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-114913 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (2m42.3583317s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (162.36s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (51.81s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-114751 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-114751 --alsologtostderr -v=1 --driver=docker: (51.7896925s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (51.81s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (154.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-115022 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-115022 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.3: (2m34.6249178s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (154.62s)

                                                
                                    
x
+
TestPause/serial/Pause (2.23s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-114751 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-114751 --alsologtostderr -v=5: (2.2309859s)
--- PASS: TestPause/serial/Pause (2.23s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (1.57s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-114751 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-114751 --output=json --layout=cluster: exit status 2 (1.570583s)

                                                
                                                
-- stdout --
	{"Name":"pause-114751","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-114751","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (1.57s)
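
When the cluster is paused, status --output=json --layout=cluster exits with status 2 and reports StatusCode 418 ("Paused"), as seen above. A minimal sketch that reads that JSON, assuming minikube on PATH and a hypothetical profile name; the struct keeps only top-level fields visible in the output above.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterStatus mirrors only the top-level fields visible in the log above.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

func main() {
	cmd := exec.Command("minikube", "status", "-p", "pause-demo", // hypothetical profile
		"--output=json", "--layout=cluster")

	// Exit status 2 is expected for a paused cluster, so ignore err and parse stdout.
	out, _ := cmd.Output()

	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("could not parse status JSON:", err)
		return
	}
	fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName) // e.g. 418 Paused
}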

                                                
                                    
x
+
TestPause/serial/Unpause (2s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-114751 --alsologtostderr -v=5
E0114 11:51:02.883436    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-114751 --alsologtostderr -v=5: (2.0028512s)
--- PASS: TestPause/serial/Unpause (2.00s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (2.8s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-114751 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-114751 --alsologtostderr -v=5: (2.7966444s)
--- PASS: TestPause/serial/PauseAgain (2.80s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (12.82s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-114751 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-114751 --alsologtostderr -v=5: (12.8174447s)
--- PASS: TestPause/serial/DeletePaused (12.82s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (18.7s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

                                                
                                                
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (17.976579s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-114751
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-114751: exit status 1 (198.2972ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-114751

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (18.70s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (106.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-115140 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-115140 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.3: (1m46.3284626s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (106.33s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (13.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-114913 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [807ecf0f-67c3-4fdf-99db-b46f5d9d3a8f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [807ecf0f-67c3-4fdf-99db-b46f5d9d3a8f] Running
E0114 11:52:05.238192    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.0602366s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-114913 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (13.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-114913 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-114913 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.9205305s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-114913 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (13.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-114913 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-114913 --alsologtostderr -v=3: (13.3545803s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.35s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (97.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-115223 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-115223 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.3: (1m37.3457866s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (97.35s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-114913 -n old-k8s-version-114913
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-114913 -n old-k8s-version-114913: exit status 7 (651.063ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-114913 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (443.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-114913 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-114913 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (7m21.4411775s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-114913 -n old-k8s-version-114913

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-114913 -n old-k8s-version-114913: (2.1781781s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (443.62s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-115022 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [23fea911-f153-4152-ac19-35742b6c667c] Pending
helpers_test.go:342: "busybox" [23fea911-f153-4152-ac19-35742b6c667c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [23fea911-f153-4152-ac19-35742b6c667c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.0498058s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-115022 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-115022 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-115022 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.5860656s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-115022 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.98s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (13.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-115022 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-115022 --alsologtostderr -v=3: (13.3122212s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-115022 -n no-preload-115022
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-115022 -n no-preload-115022: exit status 7 (662.9691ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-115022 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (356.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-115022 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-115022 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.25.3: (5m53.8620604s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-115022 -n no-preload-115022
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-115022 -n no-preload-115022: (2.3042624s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (356.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-115140 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [63785c7b-2be6-4737-90e1-f31cadd8ec89] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [63785c7b-2be6-4737-90e1-f31cadd8ec89] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.0401063s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-115140 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-115140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-115140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.1239052s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-115140 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.57s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-115140 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-115140 --alsologtostderr -v=3: (13.4304919s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-115140 -n default-k8s-diff-port-115140
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-115140 -n default-k8s-diff-port-115140: exit status 7 (696.7986ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-115140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (354.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-115140 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-115140 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.25.3: (5m51.3737015s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-115140 -n default-k8s-diff-port-115140

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-115140 -n default-k8s-diff-port-115140: (3.0480027s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (354.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-115223 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-115223 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.3220638s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (13.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-115223 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-115223 --alsologtostderr -v=3: (13.8925863s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.89s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-115223 -n newest-cni-115223
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-115223 -n newest-cni-115223: exit status 7 (665.8554ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-115223 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (50.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-115223 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.3
E0114 11:55:00.708439    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-115223 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.25.3: (47.4029552s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-115223 -n newest-cni-115223
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-115223 -n newest-cni-115223: (2.8191634s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (50.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (2.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-115223 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-115223 "sudo crictl images -o json": (2.1513728s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (2.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (13.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-115223 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-115223 --alsologtostderr -v=1: (3.3268894s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-115223 -n newest-cni-115223
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-115223 -n newest-cni-115223: exit status 2 (1.726899s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-115223 -n newest-cni-115223
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-115223 -n newest-cni-115223: exit status 2 (1.7384736s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-115223 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-115223 --alsologtostderr -v=1: (2.1746618s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-115223 -n newest-cni-115223
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-115223 -n newest-cni-115223: (2.5228694s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-115223 -n newest-cni-115223
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-115223 -n newest-cni-115223: (2.4913486s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (13.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (102.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-115542 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.3
E0114 11:55:46.074145    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:56:02.887917    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 11:57:05.235265    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-115542 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.3: (1m42.7114922s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (102.71s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-115542 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [aa98713c-1219-4ae1-be94-24f2d3ff7dc6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [aa98713c-1219-4ae1-be94-24f2d3ff7dc6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.0409276s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-115542 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-115542 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-115542 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.3662716s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-115542 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-115542 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-115542 --alsologtostderr -v=3: (13.4047592s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-115542 -n embed-certs-115542
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-115542 -n embed-certs-115542: exit status 7 (670.3922ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-115542 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (358.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-115542 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-115542 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.25.3: (5m55.8485873s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-115542 -n embed-certs-115542

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-115542 -n embed-certs-115542: (2.242031s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (358.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (59.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-n4vzx" [52fa429e-a584-48eb-b7e6-420ba2bdf1a7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0114 11:59:45.355302    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-n4vzx" [52fa429e-a584-48eb-b7e6-420ba2bdf1a7] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 59.0567814s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (59.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (47.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-hrzsh" [45df8e6e-f8ca-4c93-8abf-8b4d1b6d0431] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-hrzsh" [45df8e6e-f8ca-4c93-8abf-8b4d1b6d0431] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 47.0562071s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (47.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (29.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-84b68f675b-6r56w" [145547cd-dba8-488e-bcfa-1e64dbdd881a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0114 12:00:00.711735    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-84b68f675b-6r56w" [145547cd-dba8-488e-bcfa-1e64dbdd881a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 29.0405815s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (29.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-84b68f675b-6r56w" [145547cd-dba8-488e-bcfa-1e64dbdd881a] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0271782s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-114913 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.56s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-n4vzx" [52fa429e-a584-48eb-b7e6-420ba2bdf1a7] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0391593s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-115022 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-114913 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-114913 "sudo crictl images -o json": (1.8308057s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.83s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (2.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-115022 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-115022 "sudo crictl images -o json": (2.0954745s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (2.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (15.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-114913 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-114913 --alsologtostderr -v=1: (3.868659s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-114913 -n old-k8s-version-114913

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-114913 -n old-k8s-version-114913: exit status 2 (2.0984964s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-114913 -n old-k8s-version-114913

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-114913 -n old-k8s-version-114913: exit status 2 (2.0199994s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-114913 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-114913 --alsologtostderr -v=1: (2.7650249s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-114913 -n old-k8s-version-114913

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-114913 -n old-k8s-version-114913: (2.761173s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-114913 -n old-k8s-version-114913

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-114913 -n old-k8s-version-114913: (2.4780273s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (15.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (16.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-115022 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-115022 --alsologtostderr -v=1: (3.8724557s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-115022 -n no-preload-115022

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-115022 -n no-preload-115022: exit status 2 (2.514509s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-115022 -n no-preload-115022

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-115022 -n no-preload-115022: exit status 2 (2.2870605s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-115022 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-115022 --alsologtostderr -v=1: (3.0193656s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-115022 -n no-preload-115022

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-115022 -n no-preload-115022: (2.6831153s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-115022 -n no-preload-115022

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-115022 -n no-preload-115022: (2.043228s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (16.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-hrzsh" [45df8e6e-f8ca-4c93-8abf-8b4d1b6d0431] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0674918s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-115140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-115140 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-115140 "sudo crictl images -o json": (1.9251589s)

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (27.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-115140 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-115140 --alsologtostderr -v=1: (3.540207s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-115140 -n default-k8s-diff-port-115140
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-115140 -n default-k8s-diff-port-115140: exit status 2 (1.4975608s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-115140 -n default-k8s-diff-port-115140
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-115140 -n default-k8s-diff-port-115140: exit status 2 (1.4864518s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-115140 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-115140 --alsologtostderr -v=1: (7.7639626s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-115140 -n default-k8s-diff-port-115140
E0114 12:01:02.891681    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-115140 -n default-k8s-diff-port-115140: (10.5774441s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-115140 -n default-k8s-diff-port-115140

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-115140 -n default-k8s-diff-port-115140: (2.1951861s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (27.06s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (116.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-114507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-114507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: (1m56.1058703s)
--- PASS: TestNetworkPlugins/group/auto/Start (116.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (148.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-114509 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-114509 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: (2m28.351142s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (148.35s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (1.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-114507 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-114507 "pgrep -a kubelet": (1.6202211s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (1.62s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (41.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-114507 replace --force -f testdata\netcat-deployment.yaml
E0114 12:03:02.878493    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-dw9jl" [23ae5bc0-08c4-4038-9635-c3c97c3d0a1c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0114 12:03:08.011104    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
E0114 12:03:18.264429    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
E0114 12:03:18.990779    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:03:27.368474    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:03:27.391789    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:03:27.406749    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:03:27.438782    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:03:27.485758    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:03:27.577798    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:03:27.750758    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:03:28.081763    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:03:28.729367    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:03:30.021015    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:03:32.595559    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:03:37.726941    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:03:38.755508    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
helpers_test.go:342: "netcat-5788d667bd-dw9jl" [23ae5bc0-08c4-4038-9635-c3c97c3d0a1c] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 40.0860313s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (41.05s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-snsbp" [61cb5ece-e8c3-4b67-bb86-1dff3e05cf31] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.050257s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.06s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-114507 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.68s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.62s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (5.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.6074572s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.62s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (1.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-114509 "pgrep -a kubelet"
E0114 12:03:47.977728    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-114509 "pgrep -a kubelet": (1.9256927s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (1.93s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (47.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-114509 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-p84fg" [216eb640-0794-4f11-82ed-385345d6ed8d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-p84fg" [216eb640-0794-4f11-82ed-385345d6ed8d] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 47.018788s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (47.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (66.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-qkx9v" [0b66d413-86ac-43ef-b367-e03065c00766] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-qkx9v" [0b66d413-86ac-43ef-b367-e03065c00766] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 1m6.0587173s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (66.07s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (1.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-114509 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Done: kubectl --context kindnet-114509 exec deployment/netcat -- nslookup kubernetes.default: (1.335971s)
--- PASS: TestNetworkPlugins/group/kindnet/DNS (1.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-114509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-114509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.58s)
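
For reference, the three kindnet checks above (DNS, Localhost, HairPin) are plain kubectl exec calls against the netcat deployment. Below is a minimal sketch of running the same probes by hand, assuming the kindnet-114509 profile from this run is still up and kubectl is on PATH; the wrapper program is illustrative and not part of the suite.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a kubectl command against the kindnet-114509 context and
// prints whatever the pod returned; a non-zero exit marks the check failed.
func run(args ...string) {
	full := append([]string{"--context", "kindnet-114509"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", full, out)
	if err != nil {
		fmt.Println("check failed:", err)
	}
}

func main() {
	// DNS: resolve the in-cluster kubernetes.default service name (net_test.go:169).
	run("exec", "deployment/netcat", "--", "nslookup", "kubernetes.default")
	// Localhost: the pod can reach port 8080 on localhost (net_test.go:188).
	run("exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")
	// HairPin: the pod can reach itself through its own service name (net_test.go:238).
	run("exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
}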

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (370.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-114509 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p false-114509 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: (6m10.2918829s)
--- PASS: TestNetworkPlugins/group/false/Start (370.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-qkx9v" [0b66d413-86ac-43ef-b367-e03065c00766] Running
E0114 12:05:00.713465    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.113007s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-115542 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.67s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (2.00s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-115542 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-115542 "sudo crictl images -o json": (1.9975155s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (2.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (17.00s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-115542 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-115542 --alsologtostderr -v=1: (4.0487253s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-115542 -n embed-certs-115542
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-115542 -n embed-certs-115542: exit status 2 (2.453248s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-115542 -n embed-certs-115542
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-115542 -n embed-certs-115542: exit status 2 (1.6889652s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-115542 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-115542 --alsologtostderr -v=1: (3.6889917s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-115542 -n embed-certs-115542
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-115542 -n embed-certs-115542: (2.4201523s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-115542 -n embed-certs-115542
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-115542 -n embed-certs-115542: (2.7015764s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (17.00s)
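
The pause check reduces to the minikube invocations recorded above: pause, two status queries (which exit 2 by design while components are paused), unpause, and the same status queries again. A rough sketch of the sequence, assuming the embed-certs-115542 profile and a minikube binary on PATH; the wrapper below is hypothetical, not the test's own code.

package main

import (
	"fmt"
	"os/exec"
)

// minikube runs one minikube subcommand against the embed-certs-115542
// profile. A non-zero exit is only reported, because `status` exits 2 by
// design while components are paused (the "may be ok" lines in the log).
func minikube(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		fmt.Println("non-zero exit (may be ok):", err)
	}
	return string(out)
}

func main() {
	p := "embed-certs-115542"
	minikube("pause", "-p", p)
	fmt.Print(minikube("status", "--format={{.APIServer}}", "-p", p)) // expected: Paused
	fmt.Print(minikube("status", "--format={{.Kubelet}}", "-p", p))   // expected: Stopped
	minikube("unpause", "-p", p)
	fmt.Print(minikube("status", "--format={{.APIServer}}", "-p", p)) // should no longer report Paused
	fmt.Print(minikube("status", "--format={{.Kubelet}}", "-p", p))   // should no longer report Stopped
}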

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (366.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-114507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker
E0114 12:06:02.897501    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
E0114 12:06:11.361653    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:06:56.954828    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:07:05.249551    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 12:07:24.766366    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\old-k8s-version-114913\client.crt: The system cannot find the path specified.
E0114 12:07:57.686589    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
E0114 12:08:03.613246    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.
E0114 12:08:03.627829    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.
E0114 12:08:03.643520    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.
E0114 12:08:03.673949    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.
E0114 12:08:03.720511    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.
E0114 12:08:03.815020    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.
E0114 12:08:03.987345    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.
E0114 12:08:04.315582    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.
E0114 12:08:04.960788    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.
E0114 12:08:06.244626    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.
E0114 12:08:08.819515    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.
E0114 12:08:13.941979    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.
E0114 12:08:24.196371    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.
E0114 12:08:25.501074    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\no-preload-115022\client.crt: The system cannot find the path specified.
E0114 12:08:27.368533    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:08:41.566861    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
E0114 12:08:41.582400    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
E0114 12:08:41.597567    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
E0114 12:08:41.629196    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
E0114 12:08:41.675150    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
E0114 12:08:41.769896    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
E0114 12:08:41.941971    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
E0114 12:08:42.272041    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
E0114 12:08:42.920858    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
E0114 12:08:44.206677    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
E0114 12:08:44.677804    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.
E0114 12:08:46.770172    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
E0114 12:08:51.905247    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
E0114 12:08:55.208413    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\default-k8s-diff-port-115140\client.crt: The system cannot find the path specified.
E0114 12:09:02.149447    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
E0114 12:09:22.644089    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
E0114 12:09:25.646092    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.
E0114 12:10:00.722915    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\ingress-addon-legacy-110215\client.crt: The system cannot find the path specified.
E0114 12:10:03.611865    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.
E0114 12:10:47.579779    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\auto-114507\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-114507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: (6m6.15532s)
--- PASS: TestNetworkPlugins/group/bridge/Start (366.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (1.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-114509 "pgrep -a kubelet"

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-114509 "pgrep -a kubelet": (1.7387012s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (1.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (34.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-114509 replace --force -f testdata\netcat-deployment.yaml
E0114 12:11:02.902854    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-102159\client.crt: The system cannot find the path specified.
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-gcqcp" [8647d250-62e3-49fb-bad7-f8ce6884aea8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-gcqcp" [8647d250-62e3-49fb-bad7-f8ce6884aea8] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 34.0329176s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (34.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (105.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-114507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker
E0114 12:11:25.544517    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kindnet-114509\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-114507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: (1m45.7836804s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (105.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (1.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-114507 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-114507 "pgrep -a kubelet": (1.4932503s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (1.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (27.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-114507 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-vqvnz" [3d5a207b-8faf-4161-94b1-adefabae688c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-vqvnz" [3d5a207b-8faf-4161-94b1-adefabae688c] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 26.550975s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (27.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-114507 "pgrep -a kubelet"

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-114507 "pgrep -a kubelet": (1.5406488s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (26.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-114507 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-zfkgb" [c8260814-ae80-45f9-a4a8-df1ba8fc5f97] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-zfkgb" [c8260814-ae80-45f9-a4a8-df1ba8fc5f97] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 26.0303983s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (26.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-114507 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (101.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-114507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-114507 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: (1m41.6640621s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (101.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (1.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-114507 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-114507 "pgrep -a kubelet": (1.4894966s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (1.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (26.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-114507 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-7rm74" [8b08d5d9-3c7f-4713-a47e-469a1af1432f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-7rm74" [8b08d5d9-3c7f-4713-a47e-469a1af1432f] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 25.0273441s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (26.05s)
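
Every NetCatPod step follows the same pattern seen above: force-replace the netcat deployment from testdata, then wait for pods labelled app=netcat to become ready. A minimal sketch of that pattern, assuming the kubenet-114507 context and using kubectl wait in place of the harness's own polling helper:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// kubectl runs one kubectl command against the kubenet-114507 context and
// aborts on failure.
func kubectl(args ...string) {
	full := append([]string{"--context", "kubenet-114507"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		log.Fatalf("kubectl %v failed: %v", full, err)
	}
}

func main() {
	// Recreate the netcat deployment from the test's manifest (path as logged above).
	kubectl("replace", "--force", "-f", `testdata\netcat-deployment.yaml`)
	// Wait for the pods the harness matches with "app=netcat" to become Ready,
	// using the same 15m budget the test allows itself.
	kubectl("wait", "--for=condition=ready", "pod", "-l", "app=netcat", "--timeout=15m")
}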

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.60s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-114507 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-114507 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.53s)

                                                
                                    

Test skip (25/280)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Registry (33.38s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:287: registry stabilized in 35.6346ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-gnst2" [11fb5c8b-0b9c-4492-aae2-d58196cbae9f] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0302968s

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:292: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-proxy-zwz5z" [90b99ced-157d-4438-aa6a-126546ed0338] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:292: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0764219s
addons_test.go:297: (dbg) Run:  kubectl --context addons-100931 delete po -l run=registry-test --now
addons_test.go:302: (dbg) Run:  kubectl --context addons-100931 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:302: (dbg) Done: kubectl --context addons-100931 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (22.8954937s)
addons_test.go:312: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (33.38s)
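
The skip above happens only after the in-cluster probe itself succeeds; the remainder of the test is skipped because of the Docker driver's connectivity assumptions on Windows. A hedged sketch of just that probe, assuming the addons-100931 context and the busybox image named in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One-off busybox pod probing the registry Service from inside the cluster,
	// as in addons_test.go:302; the -t flag from the log is dropped here
	// because no terminal is attached.
	cmd := exec.Command("kubectl", "--context", "addons-100931",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("registry probe failed:", err)
	}
}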

                                                
                                    
x
+
TestAddons/parallel/Ingress (46.1s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:169: (dbg) Run:  kubectl --context addons-100931 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:169: (dbg) Done: kubectl --context addons-100931 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (4.3711268s)
addons_test.go:189: (dbg) Run:  kubectl --context addons-100931 replace --force -f testdata\nginx-ingress-v1.yaml

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:189: (dbg) Done: kubectl --context addons-100931 replace --force -f testdata\nginx-ingress-v1.yaml: (4.0344489s)
addons_test.go:202: (dbg) Run:  kubectl --context addons-100931 replace --force -f testdata\nginx-pod-svc.yaml

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:202: (dbg) Done: kubectl --context addons-100931 replace --force -f testdata\nginx-pod-svc.yaml: (6.5899253s)
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [07f55586-2b8a-4239-8e35-d85c477ccfbf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [07f55586-2b8a-4239-8e35-d85c477ccfbf] Running
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 29.2684519s
addons_test.go:219: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-100931 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:219: (dbg) Done: out/minikube-windows-amd64.exe -p addons-100931 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (1.6685407s)
addons_test.go:239: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (46.10s)
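
Everything up to this skip is reproducible with the commands recorded above; the final check is a curl issued through minikube ssh with the Host header the Ingress rule matches. A minimal sketch of that last step, assuming the addons-100931 profile is still running:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Curl the ingress from inside the node, sending the Host header the
	// nginx Ingress rule matches on (addons_test.go:219).
	cmd := exec.Command("minikube", "-p", "addons-100931", "ssh",
		"curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("ingress check failed:", err)
	}
}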

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:455: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (300.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-102159 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:909: output didn't produce a URL
functional_test.go:903: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-102159 --alsologtostderr -v=1] ...
helpers_test.go:500: unable to terminate pid 10208: Access is denied.
E0114 10:32:05.195860    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:37:05.197388    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:38:28.372399    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:42:05.200507    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:47:05.198870    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:52:05.191347    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:55:08.391911    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
E0114 10:57:05.194316    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)
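
The dashboard test times out waiting for a URL on stdout and then cannot interrupt the child process on Windows. A rough sketch of the same URL wait outside the harness, assuming the functional-102159 profile and a minikube binary on PATH (the scanner logic here is illustrative, not the test's):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Launch the dashboard proxy for the functional-102159 profile and wait
	// for the first stdout line that looks like a URL, which is what the
	// test was polling for before it gave up.
	cmd := exec.Command("minikube", "dashboard", "--url", "-p", "functional-102159")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		if line := sc.Text(); strings.HasPrefix(line, "http") {
			fmt.Println("dashboard URL:", line)
			break
		}
	}
	// The harness could not deliver an interrupt on Windows ("Access is denied");
	// this sketch simply kills the proxy process outright.
	_ = cmd.Process.Kill()
}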

                                                
                                    
x
+
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-102159 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-102159 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-5bzgt" [1a3886dd-50a8-4eb5-a8c2-c40f666e343b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-connect-6458c8fb6f-5bzgt" [1a3886dd-50a8-4eb5-a8c2-c40f666e343b] Running
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.1500948s
functional_test.go:1576: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (10.03s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (47.06s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:169: (dbg) Run:  kubectl --context ingress-addon-legacy-110215 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:169: (dbg) Done: kubectl --context ingress-addon-legacy-110215 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.0942798s)
addons_test.go:189: (dbg) Run:  kubectl --context ingress-addon-legacy-110215 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:202: (dbg) Run:  kubectl --context ingress-addon-legacy-110215 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:207: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [0d90f67d-73d1-4d13-abaf-8aa892bd3c14] Pending
helpers_test.go:342: "nginx" [0d90f67d-73d1-4d13-abaf-8aa892bd3c14] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [0d90f67d-73d1-4d13-abaf-8aa892bd3c14] Running
addons_test.go:207: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 36.1721932s
addons_test.go:219: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-110215 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:219: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-110215 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (1.3778226s)
addons_test.go:239: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (47.06s)

                                                
                                    
x
+
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (1.55s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
helpers_test.go:175: Cleaning up "disable-driver-mounts-115020" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-115020
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-115020: (1.5513975s)
--- SKIP: TestStartStop/group/disable-driver-mounts (1.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel (1.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-114507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-114507
E0114 11:45:08.437499    9968 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-100931\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-114507: (1.6713286s)
--- SKIP: TestNetworkPlugins/group/flannel (1.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel (1.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-114509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-flannel-114509
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-flannel-114509: (1.6129536s)
--- SKIP: TestNetworkPlugins/group/custom-flannel (1.61s)

                                                
                                    