Test Report: Docker_Windows 13730

eb19396baacb27bcde6912a0ea5aa6419fc16109:2022-03-29:23253

Failed tests (10/272)

Order  Failed test  Duration (s)
41 TestCertExpiration 920.56
86 TestFunctional/parallel/ServiceCmd 2142.84
209 TestSkaffold 136.66
224 TestNoKubernetes/serial/ProfileList 18.14
256 TestNetworkPlugins/group/cilium/Start 931.55
260 TestNetworkPlugins/group/calico/Start 915.51
270 TestNetworkPlugins/group/kindnet/Start 359.13
276 TestNetworkPlugins/group/enable-default-cni/DNS 359.74
286 TestNetworkPlugins/group/bridge/DNS 371.76
292 TestNetworkPlugins/group/kubenet/DNS 315.09
TestCertExpiration (920.56s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220329190729-1328 --memory=2048 --cert-expiration=3m --driver=docker

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-20220329190729-1328 --memory=2048 --cert-expiration=3m --driver=docker: exit status 80 (5m51.1605615s)

-- stdout --
	* [cert-expiration-20220329190729-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Starting control plane node cert-expiration-20220329190729-1328 in cluster cert-expiration-20220329190729-1328
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20220329190729-1328" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-20220329190729-1328 --name cert-expiration-20220329190729-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-20220329190729-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-20220329190729-1328 --network cert-expiration-20220329190729-1328 --ip 192.168.76.2 --volume cert-expiration-20220329190729-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	f852d3d4685e416c80fef9fe46816a85ddfa6939a7687f4b8c51bc42e1dfb9ee
	
	stderr:
	docker: Error response from daemon: network cert-expiration-20220329190729-1328 not found.
	
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20220329190729-1328" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-20220329190729-1328 --name cert-expiration-20220329190729-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-20220329190729-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-20220329190729-1328 --network cert-expiration-20220329190729-1328 --ip 192.168.76.2 --volume cert-expiration-20220329190729-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	4a1456d4488a91a68fecbbd527435dca4cffdca6936a11d943d46d4b1fc64c80
	
	stderr:
	docker: Error response from daemon: network cert-expiration-20220329190729-1328 not found.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-20220329190729-1328 --name cert-expiration-20220329190729-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-20220329190729-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-20220329190729-1328 --network cert-expiration-20220329190729-1328 --ip 192.168.76.2 --volume cert-expiration-20220329190729-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	4a1456d4488a91a68fecbbd527435dca4cffdca6936a11d943d46d4b1fc64c80
	
	stderr:
	docker: Error response from daemon: network cert-expiration-20220329190729-1328 not found.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:126: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p cert-expiration-20220329190729-1328 --memory=2048 --cert-expiration=3m --driver=docker" : exit status 80
E0329 19:13:22.771403    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.

=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220329190729-1328 --memory=2048 --cert-expiration=8760h --driver=docker

=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-20220329190729-1328 --memory=2048 --cert-expiration=8760h --driver=docker: (5m48.1231918s)
cert_options_test.go:137: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-20220329190729-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-20220329190729-1328 in cluster cert-expiration-20220329190729-1328
	* Pulling base image ...
	* docker "cert-expiration-20220329190729-1328" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	  - Want kubectl v1.23.5? Try 'minikube kubectl -- get pods -A'
	* Done! kubectl is now configured to use "cert-expiration-20220329190729-1328" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilites with Kubernetes 1.23.5.

** /stderr **
cert_options_test.go:139: *** TestCertExpiration FAILED at 2022-03-29 19:22:09.1410266 +0000 GMT m=+7666.732640501
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestCertExpiration]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect cert-expiration-20220329190729-1328
helpers_test.go:236: (dbg) docker inspect cert-expiration-20220329190729-1328:

-- stdout --
	[
	    {
	        "Id": "6e9adcaedb6a39fb21795fd28098bfb8c8c46666e5199f4d17caf3f8726b8602",
	        "Created": "2022-03-29T19:20:50.5533447Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 218487,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-29T19:20:52.7820033Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/6e9adcaedb6a39fb21795fd28098bfb8c8c46666e5199f4d17caf3f8726b8602/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6e9adcaedb6a39fb21795fd28098bfb8c8c46666e5199f4d17caf3f8726b8602/hostname",
	        "HostsPath": "/var/lib/docker/containers/6e9adcaedb6a39fb21795fd28098bfb8c8c46666e5199f4d17caf3f8726b8602/hosts",
	        "LogPath": "/var/lib/docker/containers/6e9adcaedb6a39fb21795fd28098bfb8c8c46666e5199f4d17caf3f8726b8602/6e9adcaedb6a39fb21795fd28098bfb8c8c46666e5199f4d17caf3f8726b8602-json.log",
	        "Name": "/cert-expiration-20220329190729-1328",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "cert-expiration-20220329190729-1328:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "cert-expiration-20220329190729-1328",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d682dc39f10667f866547efb27293200814d92789ac1f10c6b2c860834299301-init/diff:/var/lib/docker/overlay2/4eae5e38ad3553f9f0fde74ad732117b98cb0e1af550ecd7ce386997eede943f/diff:/var/lib/docker/overlay2/6789b74c71a0164bd481c99dc53318989abbcdc33b160f5d04f44aee12c80671/diff:/var/lib/docker/overlay2/91c6ac2f9a1035ebae76daccc83a3cafe5d26b2bd6b60ad54a6e29588a7003f8/diff:/var/lib/docker/overlay2/a916d7329da723d8397bfda8e20f2beb9156ceece20236242a811e43984bbfeb/diff:/var/lib/docker/overlay2/b046f566fd53b4f2f6d2c347c752b47f6c1a64316baeaa8c0fda825346ef7aba/diff:/var/lib/docker/overlay2/13a76ad56283b88db0508d09cc281c66801cee04cdbdd8f00827788d5231a025/diff:/var/lib/docker/overlay2/8e95b9ffc444e9f6b52db61f07f0a93bb3feb51b5d9dab6b7df487fef8d277f6/diff:/var/lib/docker/overlay2/bf807f6bedece6f8033221974e6b2ffdf94a6f9320d4f09337ed51b411f8f999/diff:/var/lib/docker/overlay2/d8184ca2707eba09a4f6bd90cad4795ce0f226f863f2d84723287ad76f1158d8/diff:/var/lib/docker/overlay2/3906858e1746cab95814956b950325758e0765c0a6597b3d9062a4c36ab409be/diff:/var/lib/docker/overlay2/128db97cb7dee3d09e506aaaf97a45b5a647d8eb90782f5dd444aec15ff525da/diff:/var/lib/docker/overlay2/713bbf0f0ba84035f3a06b59c058ccfe9e7639f2ecb9d3db244e1adec7b6c46b/diff:/var/lib/docker/overlay2/6a820465cd423660c71cbb6741a47e4619efcf0010ac49bd49146501b9ac4925/diff:/var/lib/docker/overlay2/20c66385f330043e2c50b8193a59172de08776bbabdca289cb51c1b5f17e9b98/diff:/var/lib/docker/overlay2/7b2439fa81d8ff403bd5767752380391449aeba92453e1846fd36cfce9e6de61/diff:/var/lib/docker/overlay2/ee227ab74915b1419cfbc67f2b14b08cf564b4a38a39b157de2c65250a9172bf/diff:/var/lib/docker/overlay2/0b92e2531a28b01133cc2ab65802b03c04ef0213e850ac8558c9c4071fd018dd/diff:/var/lib/docker/overlay2/3de4968e9a773e45d79b096d23038e48758528adce69f14e7ff3a93bbd3192d7/diff:/var/lib/docker/overlay2/92eb87a3831ecebb34eb1e0ea7a71af9883f8426f35387845769f5fe75f04a52/diff:/var/lib/docker/overlay2/82a4c6fc3869bde23593a8490af76e406ad5a27ef1c30a38b481944390f7466e/diff:/var/lib/docker/overlay2/6c957b5c04708287c2261d895a0f4563f25cc766eb21913c4ceb36f27a04914e/diff:/var/lib/docker/overlay2/21df3fb223398ef06fb62c4617e3487f0ac955e4f38ee3d2d72c9da488d436c7/diff:/var/lib/docker/overlay2/ddaf18203a4027208ea592b9716939849af0aa5d2cac57d2b0c36382e078f483/diff:/var/lib/docker/overlay2/9a82b4c496462c1bf59ccb096f886e61674d92540023b7fed618682584358cbf/diff:/var/lib/docker/overlay2/62a8d9c5758a93af517541ab9d841f9415f55ca5503844371b7e35d47838dbb0/diff:/var/lib/docker/overlay2/c17d3885b54e341402c392175e2ab4ff1ab038acafe82a8090b1725613597f95/diff:/var/lib/docker/overlay2/d1401e4d6e04dded3c7d0335e32d0eb6cf2d7c19d21da53b836d591dddac8961/diff:/var/lib/docker/overlay2/7c4934c7f4f9cce1a35b340eebbc473f9bb33153f61f1c0454bffd0b2ae5a37e/diff:/var/lib/docker/overlay2/02d6bd07f6dbb7198d2c42fe26ff2efbabb9a889dfa0b79fd05e06a021bc81b4/diff:/var/lib/docker/overlay2/137f83b86485992317df9126e714cd331df51131ac4990d1040cf54cace6506e/diff:/var/lib/docker/overlay2/75d1117a1f5f001df3981193d1251ab8426eb4c100c9c1bbb946f0c2e0e1d73c/diff:/var/lib/docker/overlay2/b20542be533b230be3dee06af0364759a81f26397d9371a7052efdac48fc1a3e/diff:/var/lib/docker/overlay2/b6103a89043f339bfc18a195b11f4a57f6042806725aac9d6b8db0e2af4fe01e/diff:/var/lib/docker/overlay2/69041f5eef389b325dd43fa81731c884299e2cb880a57ba904b8752c12446236/diff:/var/lib/docker/overlay2/8bc9de0232e5ba86f129e746c52a7f53836827a1a9cfc8e0c731d81af17b92a4/diff:/var/lib/docker/overlay2/5494bafa4607149ff46b2ed95fd9c86139339508d3c27bf32346963a41ae95f1/diff:/var/lib/docker/overlay2/daaadc749b2e3fb99bb23ec4d0a908e70deef3f9caff12f7b3fa29a57086e13a/diff:/var/lib/docker/overlay2/35b939c7fd0daf3717995c2aff595f96a741b48ae2da6b523aeda782ea3922e9/diff:/var/lib/docker/overlay2/b5a01cc1c410e803d28949ef6f35b55ac04473d89beb188d9d4866287b7cbbee/diff:/var/lib/docker/overlay2/c26c0af38634a15c6619c42bd2e5ec804bab550ff8078c084ba220030d8f4b93/diff:/var/lib/docker/overlay2/c12adb9eba87b6903ac0b2e16234b6a4f11a66d10d30d5379b19963433b76506/diff:/var/lib/docker/overlay2/537ea8129185a2faaaafa08ee553e15fe2cee04e80dab99066f779573324b53c/diff:/var/lib/docker/overlay2/ba74848f80f8d422a61241b3778f2395a32e73958e6a6dfddf5724bd0367dc67/diff:/var/lib/docker/overlay2/be8013e1c023e08543e181408137e02941d2b05181428b80bf154108c0cf48a5/diff:/var/lib/docker/overlay2/895568f040b89c0f90e7f4e41a1a77ca025acd0a0e0682a242f830a2e9c4ede7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d682dc39f10667f866547efb27293200814d92789ac1f10c6b2c860834299301/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d682dc39f10667f866547efb27293200814d92789ac1f10c6b2c860834299301/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d682dc39f10667f866547efb27293200814d92789ac1f10c6b2c860834299301/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "cert-expiration-20220329190729-1328",
	                "Source": "/var/lib/docker/volumes/cert-expiration-20220329190729-1328/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "cert-expiration-20220329190729-1328",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "cert-expiration-20220329190729-1328",
	                "name.minikube.sigs.k8s.io": "cert-expiration-20220329190729-1328",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "881fc35d8713ef06d8454a6774fe6ea5e6316b4487e623e6d777eb7f06378575",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57522"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57523"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57524"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57520"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57521"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/881fc35d8713",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "cert-expiration-20220329190729-1328": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6e9adcaedb6a",
	                        "cert-expiration-20220329190729-1328"
	                    ],
	                    "NetworkID": "5a5c47b9507ff52c37db4a0187a438c47e8b33899445a2db5b23a9981bb7d3a8",
	                    "EndpointID": "363e1480dc92faa95f2cd7123c0fd89c3c03ae42a1caa1709068d7a72f0ba73d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-20220329190729-1328 -n cert-expiration-20220329190729-1328
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-20220329190729-1328 -n cert-expiration-20220329190729-1328: (4.5487794s)
helpers_test.go:245: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-expiration-20220329190729-1328 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p cert-expiration-20220329190729-1328 logs -n 25: (8.8495524s)
helpers_test.go:253: TestCertExpiration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |       User        | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                     | kubernetes-upgrade-20220329190043-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:07:53 GMT | Tue, 29 Mar 2022 19:08:21 GMT |
	|         | kubernetes-upgrade-20220329190043-1328 |                                        |                   |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20220329190043-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:08:23 GMT | Tue, 29 Mar 2022 19:09:53 GMT |
	|         | kubernetes-upgrade-20220329190043-1328 |                                        |                   |         |                               |                               |
	|         | --memory=2200                          |                                        |                   |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0      |                                        |                   |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |                   |         |                               |                               |
	| start   | -p                                     | docker-flags-20220329190750-1328       | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:07:50 GMT | Tue, 29 Mar 2022 19:10:06 GMT |
	|         | docker-flags-20220329190750-1328       |                                        |                   |         |                               |                               |
	|         | --cache-images=false                   |                                        |                   |         |                               |                               |
	|         | --memory=2048                          |                                        |                   |         |                               |                               |
	|         | --install-addons=false                 |                                        |                   |         |                               |                               |
	|         | --wait=false                           |                                        |                   |         |                               |                               |
	|         | --docker-env=FOO=BAR                   |                                        |                   |         |                               |                               |
	|         | --docker-env=BAZ=BAT                   |                                        |                   |         |                               |                               |
	|         | --docker-opt=debug                     |                                        |                   |         |                               |                               |
	|         | --docker-opt=icc=true                  |                                        |                   |         |                               |                               |
	|         | --alsologtostderr -v=5                 |                                        |                   |         |                               |                               |
	|         | --driver=docker                        |                                        |                   |         |                               |                               |
	| -p      | docker-flags-20220329190750-1328       | docker-flags-20220329190750-1328       | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:10:07 GMT | Tue, 29 Mar 2022 19:10:10 GMT |
	|         | ssh sudo systemctl show                |                                        |                   |         |                               |                               |
	|         | docker --property=Environment          |                                        |                   |         |                               |                               |
	|         | --no-pager                             |                                        |                   |         |                               |                               |
	| -p      | docker-flags-20220329190750-1328       | docker-flags-20220329190750-1328       | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:10:11 GMT | Tue, 29 Mar 2022 19:10:14 GMT |
	|         | ssh sudo systemctl show docker         |                                        |                   |         |                               |                               |
	|         | --property=ExecStart --no-pager        |                                        |                   |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20220329190043-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:09:54 GMT | Tue, 29 Mar 2022 19:10:16 GMT |
	|         | kubernetes-upgrade-20220329190043-1328 |                                        |                   |         |                               |                               |
	|         | --memory=2200                          |                                        |                   |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0      |                                        |                   |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker |                                        |                   |         |                               |                               |
	| delete  | -p                                     | docker-flags-20220329190750-1328       | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:10:15 GMT | Tue, 29 Mar 2022 19:10:32 GMT |
	|         | docker-flags-20220329190750-1328       |                                        |                   |         |                               |                               |
	| delete  | -p                                     | kubernetes-upgrade-20220329190043-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:10:17 GMT | Tue, 29 Mar 2022 19:10:33 GMT |
	|         | kubernetes-upgrade-20220329190043-1328 |                                        |                   |         |                               |                               |
	| start   | -p                                     | cert-options-20220329191032-1328       | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:10:32 GMT | Tue, 29 Mar 2022 19:12:23 GMT |
	|         | cert-options-20220329191032-1328       |                                        |                   |         |                               |                               |
	|         | --memory=2048                          |                                        |                   |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1              |                                        |                   |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15          |                                        |                   |         |                               |                               |
	|         | --apiserver-names=localhost            |                                        |                   |         |                               |                               |
	|         | --apiserver-names=www.google.com       |                                        |                   |         |                               |                               |
	|         | --apiserver-port=8555                  |                                        |                   |         |                               |                               |
	|         | --driver=docker                        |                                        |                   |         |                               |                               |
	|         | --apiserver-name=localhost             |                                        |                   |         |                               |                               |
	| -p      | cert-options-20220329191032-1328       | cert-options-20220329191032-1328       | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:12:23 GMT | Tue, 29 Mar 2022 19:12:28 GMT |
	|         | ssh openssl x509 -text -noout -in      |                                        |                   |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt  |                                        |                   |         |                               |                               |
	| ssh     | -p                                     | cert-options-20220329191032-1328       | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:12:29 GMT | Tue, 29 Mar 2022 19:12:33 GMT |
	|         | cert-options-20220329191032-1328       |                                        |                   |         |                               |                               |
	|         | -- sudo cat                            |                                        |                   |         |                               |                               |
	|         | /etc/kubernetes/admin.conf             |                                        |                   |         |                               |                               |
	| start   | -p auto-20220329190226-1328            | auto-20220329190226-1328               | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:10:33 GMT | Tue, 29 Mar 2022 19:13:11 GMT |
	|         | --memory=2048                          |                                        |                   |         |                               |                               |
	|         | --alsologtostderr                      |                                        |                   |         |                               |                               |
	|         | --wait=true --wait-timeout=5m          |                                        |                   |         |                               |                               |
	|         | --driver=docker                        |                                        |                   |         |                               |                               |
	| ssh     | -p auto-20220329190226-1328            | auto-20220329190226-1328               | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:13:11 GMT | Tue, 29 Mar 2022 19:13:15 GMT |
	|         | pgrep -a kubelet                       |                                        |                   |         |                               |                               |
	| delete  | -p                                     | cert-options-20220329191032-1328       | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:12:33 GMT | Tue, 29 Mar 2022 19:13:20 GMT |
	|         | cert-options-20220329191032-1328       |                                        |                   |         |                               |                               |
	| delete  | -p auto-20220329190226-1328            | auto-20220329190226-1328               | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:13:43 GMT | Tue, 29 Mar 2022 19:14:15 GMT |
	| start   | -p                                     | force-systemd-env-20220329190726-1328  | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:07:26 GMT | Tue, 29 Mar 2022 19:14:33 GMT |
	|         | force-systemd-env-20220329190726-1328  |                                        |                   |         |                               |                               |
	|         | --memory=2048 --alsologtostderr -v=5   |                                        |                   |         |                               |                               |
	|         | --driver=docker                        |                                        |                   |         |                               |                               |
	| -p      | force-systemd-env-20220329190726-1328  | force-systemd-env-20220329190726-1328  | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:14:33 GMT | Tue, 29 Mar 2022 19:14:37 GMT |
	|         | ssh docker info --format               |                                        |                   |         |                               |                               |
	|         | {{.CgroupDriver}}                      |                                        |                   |         |                               |                               |
	| delete  | -p                                     | force-systemd-env-20220329190726-1328  | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:14:37 GMT | Tue, 29 Mar 2022 19:15:09 GMT |
	|         | force-systemd-env-20220329190726-1328  |                                        |                   |         |                               |                               |
	| start   | -p                                     | custom-weave-20220329190230-1328       | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:15:09 GMT | Tue, 29 Mar 2022 19:17:26 GMT |
	|         | custom-weave-20220329190230-1328       |                                        |                   |         |                               |                               |
	|         | --memory=2048 --alsologtostderr        |                                        |                   |         |                               |                               |
	|         | --wait=true --wait-timeout=5m          |                                        |                   |         |                               |                               |
	|         | --cni=testdata\weavenet.yaml           |                                        |                   |         |                               |                               |
	|         | --driver=docker                        |                                        |                   |         |                               |                               |
	| ssh     | -p                                     | custom-weave-20220329190230-1328       | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:17:26 GMT | Tue, 29 Mar 2022 19:17:30 GMT |
	|         | custom-weave-20220329190230-1328       |                                        |                   |         |                               |                               |
	|         | pgrep -a kubelet                       |                                        |                   |         |                               |                               |
	| delete  | -p                                     | custom-weave-20220329190230-1328       | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:17:52 GMT | Tue, 29 Mar 2022 19:18:04 GMT |
	|         | custom-weave-20220329190230-1328       |                                        |                   |         |                               |                               |
	| start   | -p false-20220329190230-1328           | false-20220329190230-1328              | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:18:04 GMT | Tue, 29 Mar 2022 19:20:57 GMT |
	|         | --memory=2048                          |                                        |                   |         |                               |                               |
	|         | --alsologtostderr --wait=true          |                                        |                   |         |                               |                               |
	|         | --wait-timeout=5m --cni=false          |                                        |                   |         |                               |                               |
	|         | --driver=docker                        |                                        |                   |         |                               |                               |
	| ssh     | -p false-20220329190230-1328           | false-20220329190230-1328              | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:20:57 GMT | Tue, 29 Mar 2022 19:21:02 GMT |
	|         | pgrep -a kubelet                       |                                        |                   |         |                               |                               |
	| delete  | -p false-20220329190230-1328           | false-20220329190230-1328              | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:21:33 GMT | Tue, 29 Mar 2022 19:21:55 GMT |
	| start   | -p                                     | cert-expiration-20220329190729-1328    | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 19:16:21 GMT | Tue, 29 Mar 2022 19:22:09 GMT |
	|         | cert-expiration-20220329190729-1328    |                                        |                   |         |                               |                               |
	|         | --memory=2048                          |                                        |                   |         |                               |                               |
	|         | --cert-expiration=8760h                |                                        |                   |         |                               |                               |
	|         | --driver=docker                        |                                        |                   |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 19:21:55
	Running on machine: minikube8
	Binary: Built with gc go1.17.7 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 19:21:55.571459    3060 out.go:297] Setting OutFile to fd 1852 ...
	I0329 19:21:55.639464    3060 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 19:21:55.639464    3060 out.go:310] Setting ErrFile to fd 1908...
	I0329 19:21:55.639464    3060 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 19:21:55.654455    3060 out.go:304] Setting JSON to false
	I0329 19:21:55.656455    3060 start.go:114] hostinfo: {"hostname":"minikube8","uptime":8912,"bootTime":1648572803,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0329 19:21:55.656455    3060 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 19:21:55.666473    3060 out.go:176] * [kindnet-20220329190230-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0329 19:21:55.666473    3060 notify.go:193] Checking for updates...
	I0329 19:21:55.678467    3060 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0329 19:21:55.686467    3060 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0329 19:21:55.690471    3060 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 19:21:53.882864    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:56.045501    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:55.693498    3060 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 19:21:55.695484    3060 config.go:176] Loaded profile config "calico-20220329190230-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:21:55.695484    3060 config.go:176] Loaded profile config "cert-expiration-20220329190729-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:21:55.696486    3060 config.go:176] Loaded profile config "cilium-20220329190230-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:21:55.696486    3060 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 19:21:58.022478    3060 docker.go:137] docker version: linux-20.10.13
	I0329 19:21:58.030485    3060 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 19:21:58.783447    3060 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:83 OomKillDisable:true NGoroutines:60 SystemTime:2022-03-29 19:21:58.4022559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 19:21:58.789286    3060 out.go:176] * Using the docker driver based on user configuration
	I0329 19:21:58.789286    3060 start.go:283] selected driver: docker
	I0329 19:21:58.789286    3060 start.go:800] validating driver "docker" against <nil>
	I0329 19:21:58.789286    3060 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0329 19:21:58.923237    3060 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 19:21:59.760623    3060 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:84 OomKillDisable:true NGoroutines:61 SystemTime:2022-03-29 19:21:59.3543198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 19:21:59.760623    3060 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0329 19:21:59.761550    3060 start_flags.go:837] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0329 19:21:59.761550    3060 cni.go:93] Creating CNI manager for "kindnet"
	I0329 19:21:59.761550    3060 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0329 19:21:59.761550    3060 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0329 19:21:59.761550    3060 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0329 19:21:59.761550    3060 start_flags.go:306] config:
	{Name:kindnet-20220329190230-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220329190230-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 19:21:59.767551    3060 out.go:176] * Starting control plane node kindnet-20220329190230-1328 in cluster kindnet-20220329190230-1328
	I0329 19:21:59.767551    3060 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 19:21:59.777555    3060 out.go:176] * Pulling base image ...
	I0329 19:21:59.777555    3060 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 19:21:59.777555    3060 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 19:21:59.778595    3060 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 19:21:59.778595    3060 cache.go:57] Caching tarball of preloaded images
	I0329 19:21:59.778595    3060 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0329 19:21:59.778595    3060 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0329 19:21:59.779558    3060 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\config.json ...
	I0329 19:21:59.779558    3060 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\config.json: {Name:mk6dcdefc191c30bb34c1c8319cc8490444e173c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:22:00.328900    3060 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 19:22:00.328900    3060 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 19:22:00.328900    3060 cache.go:208] Successfully downloaded all kic artifacts
	I0329 19:22:00.328900    3060 start.go:348] acquiring machines lock for kindnet-20220329190230-1328: {Name:mk93919b231bfab46578efb1f64d7a60b9cbb338 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 19:22:00.328900    3060 start.go:352] acquired machines lock for "kindnet-20220329190230-1328" in 0s
	I0329 19:22:00.328900    3060 start.go:90] Provisioning new machine with config: &{Name:kindnet-20220329190230-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220329190230-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 19:22:00.330945    3060 start.go:127] createHost starting for "" (driver="docker")
	I0329 19:21:56.168143    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:58.308282    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:00.317919    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:00.336920    3060 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0329 19:22:00.336920    3060 start.go:161] libmachine.API.Create for "kindnet-20220329190230-1328" (driver="docker")
	I0329 19:22:00.336920    3060 client.go:168] LocalClient.Create starting
	I0329 19:22:00.337911    3060 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I0329 19:22:00.337911    3060 main.go:130] libmachine: Decoding PEM data...
	I0329 19:22:00.337911    3060 main.go:130] libmachine: Parsing certificate...
	I0329 19:22:00.337911    3060 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I0329 19:22:00.337911    3060 main.go:130] libmachine: Decoding PEM data...
	I0329 19:22:00.337911    3060 main.go:130] libmachine: Parsing certificate...
	I0329 19:22:00.347908    3060 cli_runner.go:133] Run: docker network inspect kindnet-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 19:21:58.073149    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:00.525490    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:02.500057    7144 out.go:203]   - Generating certificates and keys ...
	I0329 19:22:02.507057    7144 out.go:203]   - Booting up control plane ...
	I0329 19:22:02.819857    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:05.325327    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	W0329 19:22:00.882273    3060 cli_runner.go:180] docker network inspect kindnet-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0329 19:22:00.895271    3060 network_create.go:262] running [docker network inspect kindnet-20220329190230-1328] to gather additional debugging logs...
	I0329 19:22:00.895271    3060 cli_runner.go:133] Run: docker network inspect kindnet-20220329190230-1328
	W0329 19:22:01.452273    3060 cli_runner.go:180] docker network inspect kindnet-20220329190230-1328 returned with exit code 1
	I0329 19:22:01.452273    3060 network_create.go:265] error running [docker network inspect kindnet-20220329190230-1328]: docker network inspect kindnet-20220329190230-1328: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220329190230-1328
	I0329 19:22:01.452273    3060 network_create.go:267] output of [docker network inspect kindnet-20220329190230-1328]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220329190230-1328
	
	** /stderr **
	I0329 19:22:01.461276    3060 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 19:22:02.040411    3060 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014e470] misses:0}
	I0329 19:22:02.041403    3060 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 19:22:02.041403    3060 network_create.go:114] attempt to create docker network kindnet-20220329190230-1328 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0329 19:22:02.048399    3060 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220329190230-1328
	I0329 19:22:02.814865    3060 network_create.go:98] docker network kindnet-20220329190230-1328 192.168.49.0/24 created
	I0329 19:22:02.814865    3060 kic.go:106] calculated static IP "192.168.49.2" for the "kindnet-20220329190230-1328" container
	I0329 19:22:02.828851    3060 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0329 19:22:03.395451    3060 cli_runner.go:133] Run: docker volume create kindnet-20220329190230-1328 --label name.minikube.sigs.k8s.io=kindnet-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true
	I0329 19:22:03.935415    3060 oci.go:102] Successfully created a docker volume kindnet-20220329190230-1328
	I0329 19:22:03.947282    3060 cli_runner.go:133] Run: docker run --rm --name kindnet-20220329190230-1328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220329190230-1328 --entrypoint /usr/bin/test -v kindnet-20220329190230-1328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0329 19:22:02.516052    7144 out.go:203]   - Configuring RBAC rules ...
	I0329 19:22:02.522054    7144 cni.go:93] Creating CNI manager for ""
	I0329 19:22:02.522054    7144 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 19:22:02.522054    7144 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0329 19:22:02.548067    7144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3 minikube.k8s.io/name=cert-expiration-20220329190729-1328 minikube.k8s.io/updated_at=2022_03_29T19_22_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:22:02.548067    7144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:22:02.586060    7144 ops.go:34] apiserver oom_adj: -16
	I0329 19:22:03.012493    7144 kubeadm.go:1020] duration metric: took 490.4366ms to wait for elevateKubeSystemPrivileges.
	I0329 19:22:04.963907    7144 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3 minikube.k8s.io/name=cert-expiration-20220329190729-1328 minikube.k8s.io/updated_at=2022_03_29T19_22_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (2.4158259s)
	I0329 19:22:04.964919    7144 kubeadm.go:393] StartCluster complete in 40.0915936s
	I0329 19:22:04.964919    7144 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:22:04.964919    7144 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0329 19:22:04.967915    7144 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:22:05.733660    7144 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cert-expiration-20220329190729-1328" rescaled to 1
	I0329 19:22:05.733660    7144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0329 19:22:05.734671    7144 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 19:22:05.734671    7144 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0329 19:22:03.083233    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:05.580131    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:05.734671    7144 addons.go:65] Setting storage-provisioner=true in profile "cert-expiration-20220329190729-1328"
	I0329 19:22:05.741658    7144 out.go:176] * Verifying Kubernetes components...
	I0329 19:22:05.741658    7144 addons.go:153] Setting addon storage-provisioner=true in "cert-expiration-20220329190729-1328"
	W0329 19:22:05.741658    7144 addons.go:165] addon storage-provisioner should already be in state true
	I0329 19:22:05.734671    7144 addons.go:65] Setting default-storageclass=true in profile "cert-expiration-20220329190729-1328"
	I0329 19:22:05.741658    7144 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-20220329190729-1328"
	I0329 19:22:05.734671    7144 config.go:176] Loaded profile config "cert-expiration-20220329190729-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:22:05.741658    7144 host.go:66] Checking if "cert-expiration-20220329190729-1328" exists ...
	I0329 19:22:05.759666    7144 cli_runner.go:133] Run: docker container inspect cert-expiration-20220329190729-1328 --format={{.State.Status}}
	I0329 19:22:05.759666    7144 cli_runner.go:133] Run: docker container inspect cert-expiration-20220329190729-1328 --format={{.State.Status}}
	I0329 19:22:05.761693    7144 ssh_runner.go:195] Run: sudo service kubelet status
	I0329 19:22:05.931464    7144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0329 19:22:05.945469    7144 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cert-expiration-20220329190729-1328
	I0329 19:22:06.374509    7144 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 19:22:06.375492    7144 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 19:22:06.375492    7144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0329 19:22:06.394487    7144 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220329190729-1328
	I0329 19:22:06.401488    7144 addons.go:153] Setting addon default-storageclass=true in "cert-expiration-20220329190729-1328"
	W0329 19:22:06.401488    7144 addons.go:165] addon default-storageclass should already be in state true
	I0329 19:22:06.401488    7144 host.go:66] Checking if "cert-expiration-20220329190729-1328" exists ...
	I0329 19:22:06.428477    7144 cli_runner.go:133] Run: docker container inspect cert-expiration-20220329190729-1328 --format={{.State.Status}}
	I0329 19:22:06.536484    7144 api_server.go:51] waiting for apiserver process to appear ...
	I0329 19:22:06.553473    7144 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0329 19:22:06.979980    7144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57522 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cert-expiration-20220329190729-1328\id_rsa Username:docker}
	I0329 19:22:07.011940    7144 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0329 19:22:07.011940    7144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0329 19:22:07.026936    7144 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220329190729-1328
	I0329 19:22:07.190967    7144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 19:22:07.584362    7144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57522 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cert-expiration-20220329190729-1328\id_rsa Username:docker}
	I0329 19:22:07.982573    7144 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.0510966s)
	I0329 19:22:07.982573    7144 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0329 19:22:07.982573    7144 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.4290918s)
	I0329 19:22:07.982573    7144 api_server.go:71] duration metric: took 2.2478882s to wait for apiserver process to appear ...
	I0329 19:22:07.982573    7144 api_server.go:87] waiting for apiserver healthz status ...
	I0329 19:22:07.982573    7144 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57521/healthz ...
	I0329 19:22:08.009572    7144 api_server.go:266] https://127.0.0.1:57521/healthz returned 200:
	ok
	I0329 19:22:08.012609    7144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0329 19:22:08.015587    7144 api_server.go:140] control plane version: v1.23.5
	I0329 19:22:08.015587    7144 api_server.go:130] duration metric: took 33.0138ms to wait for apiserver health ...
	I0329 19:22:08.015587    7144 system_pods.go:43] waiting for kube-system pods to appear ...
	I0329 19:22:08.075588    7144 system_pods.go:59] 4 kube-system pods found
	I0329 19:22:08.075588    7144 system_pods.go:61] "etcd-cert-expiration-20220329190729-1328" [a4c744ca-2dae-4792-ae64-9b5298c48f87] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0329 19:22:08.075588    7144 system_pods.go:61] "kube-apiserver-cert-expiration-20220329190729-1328" [7b54b912-c0df-412b-8a7c-7f79089e3a0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0329 19:22:08.075588    7144 system_pods.go:61] "kube-controller-manager-cert-expiration-20220329190729-1328" [4122b692-b871-4825-b1a5-a40629a77030] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0329 19:22:08.075588    7144 system_pods.go:61] "kube-scheduler-cert-expiration-20220329190729-1328" [7d9509ed-527a-48b6-97d9-2492de5728fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0329 19:22:08.075588    7144 system_pods.go:74] duration metric: took 60.0006ms to wait for pod list to return data ...
	I0329 19:22:08.075588    7144 kubeadm.go:548] duration metric: took 2.3409026s to wait for : map[apiserver:true system_pods:true] ...
	I0329 19:22:08.075588    7144 node_conditions.go:102] verifying NodePressure condition ...
	I0329 19:22:08.095583    7144 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0329 19:22:08.095583    7144 node_conditions.go:123] node cpu capacity is 16
	I0329 19:22:08.095583    7144 node_conditions.go:105] duration metric: took 19.9949ms to run NodePressure ...
	I0329 19:22:08.095583    7144 start.go:213] waiting for startup goroutines ...
	I0329 19:22:08.475365    7144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.2836421s)
	I0329 19:22:08.787257    7144 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0329 19:22:08.788280    7144 addons.go:417] enableAddons completed in 3.0535912s
	I0329 19:22:09.061821    7144 start.go:498] kubectl: 1.18.2, cluster: 1.23.5 (minor skew: 5)
	I0329 19:22:09.067837    7144 out.go:176] 
	W0329 19:22:09.068917    7144 out.go:241] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilites with Kubernetes 1.23.5.
	I0329 19:22:09.079843    7144 out.go:176]   - Want kubectl v1.23.5? Try 'minikube kubectl -- get pods -A'
	I0329 19:22:09.082861    7144 out.go:176] * Done! kubectl is now configured to use "cert-expiration-20220329190729-1328" cluster and "default" namespace by default
	I0329 19:22:07.824415    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:10.313321    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:07.297404    3060 cli_runner.go:186] Completed: docker run --rm --name kindnet-20220329190230-1328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220329190230-1328 --entrypoint /usr/bin/test -v kindnet-20220329190230-1328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib: (3.3496831s)
	I0329 19:22:07.297404    3060 oci.go:106] Successfully prepared a docker volume kindnet-20220329190230-1328
	I0329 19:22:07.297553    3060 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 19:22:07.297553    3060 kic.go:179] Starting extracting preloaded images to volume ...
	I0329 19:22:07.306519    3060 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220329190230-1328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0329 19:22:08.067578    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:10.612505    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-03-29 19:20:53 UTC, end at Tue 2022-03-29 19:22:20 UTC. --
	Mar 29 19:21:06 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:06.796552000Z" level=info msg="Starting up"
	Mar 29 19:21:06 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:06.802754700Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Mar 29 19:21:06 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:06.802907900Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Mar 29 19:21:06 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:06.802957100Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Mar 29 19:21:06 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:06.802986700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Mar 29 19:21:06 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:06.806737600Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Mar 29 19:21:06 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:06.806877400Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Mar 29 19:21:06 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:06.806909500Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Mar 29 19:21:06 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:06.806932300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Mar 29 19:21:07 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:07.091940100Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Mar 29 19:21:07 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:07.118636800Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Mar 29 19:21:07 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:07.118864700Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Mar 29 19:21:07 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:07.118884300Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Mar 29 19:21:07 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:07.118897300Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Mar 29 19:21:07 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:07.118910800Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Mar 29 19:21:07 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:07.118924900Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Mar 29 19:21:07 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:07.119417900Z" level=info msg="Loading containers: start."
	Mar 29 19:21:07 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:07.497500900Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 29 19:21:07 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:07.628190100Z" level=info msg="Loading containers: done."
	Mar 29 19:21:07 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:07.819974700Z" level=info msg="Docker daemon" commit=906f57f graphdriver(s)=overlay2 version=20.10.13
	Mar 29 19:21:07 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:07.820132400Z" level=info msg="Daemon has completed initialization"
	Mar 29 19:21:07 cert-expiration-20220329190729-1328 systemd[1]: Started Docker Application Container Engine.
	Mar 29 19:21:07 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:07.941307200Z" level=info msg="API listen on [::]:2376"
	Mar 29 19:21:07 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:07.964023700Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 29 19:21:55 cert-expiration-20220329190729-1328 dockerd[471]: time="2022-03-29T19:21:55.256694000Z" level=info msg="ignoring event" container=18cab7150785779618a51a9ff96813592838ab6d6a2ab4d8178f1327cdcda987 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	bf1cdbe4a20ef       b0c9e5e4dbb14       24 seconds ago      Running             kube-controller-manager   1                   2ada330dad59c
	c57b86f45c9a6       25f8c7f3da61c       44 seconds ago      Running             etcd                      0                   979cb0256ae4f
	5e2dba3aacfee       884d49d6d8c9f       44 seconds ago      Running             kube-scheduler            0                   45418d6e560dd
	18cab71507857       b0c9e5e4dbb14       44 seconds ago      Exited              kube-controller-manager   0                   2ada330dad59c
	a5210787d0248       3fc1d62d65872       44 seconds ago      Running             kube-apiserver            0                   2d24c5f54f997
	
	* 
	* ==> describe nodes <==
	* Name:               cert-expiration-20220329190729-1328
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=cert-expiration-20220329190729-1328
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3
	                    minikube.k8s.io/name=cert-expiration-20220329190729-1328
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_29T19_22_02_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 29 Mar 2022 19:21:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  cert-expiration-20220329190729-1328
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 29 Mar 2022 19:22:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 29 Mar 2022 19:22:13 +0000   Tue, 29 Mar 2022 19:21:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 29 Mar 2022 19:22:13 +0000   Tue, 29 Mar 2022 19:21:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 29 Mar 2022 19:22:13 +0000   Tue, 29 Mar 2022 19:21:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 29 Mar 2022 19:22:13 +0000   Tue, 29 Mar 2022 19:22:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    cert-expiration-20220329190729-1328
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                140a143b31184b58be947b52a01fff83
	  Boot ID:                    c6888bb0-0d7a-4902-95ce-20313bf24adc
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.13
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-q4js5                                        100m (0%!)(MISSING)     0 (0%!)(MISSING)      70Mi (0%!)(MISSING)        170Mi (0%!)(MISSING)     6s
	  kube-system                 etcd-cert-expiration-20220329190729-1328                       100m (0%!)(MISSING)     0 (0%!)(MISSING)      100Mi (0%!)(MISSING)       0 (0%!)(MISSING)         25s
	  kube-system                 kube-apiserver-cert-expiration-20220329190729-1328             250m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         22s
	  kube-system                 kube-controller-manager-cert-expiration-20220329190729-1328    200m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         27s
	  kube-system                 kube-proxy-sn7nc                                               0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6s
	  kube-system                 kube-scheduler-cert-expiration-20220329190729-1328             100m (0%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         17s
	  kube-system                 storage-provisioner                                            0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%!)(MISSING)   0 (0%!)(MISSING)
	  memory             170Mi (0%!)(MISSING)  170Mi (0%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)  kubelet  Node cert-expiration-20220329190729-1328 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet  Node cert-expiration-20220329190729-1328 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x7 over 47s)  kubelet  Node cert-expiration-20220329190729-1328 status is now: NodeHasSufficientPID
	  Normal  Starting                 18s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s                kubelet  Node cert-expiration-20220329190729-1328 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s                kubelet  Node cert-expiration-20220329190729-1328 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s                kubelet  Node cert-expiration-20220329190729-1328 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                8s                 kubelet  Node cert-expiration-20220329190729-1328 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000113] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000050] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.006224] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.003599] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000453] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.079129] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000007] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Mar29 19:07] WSL2: Performing memory compaction.
	[Mar29 19:08] WSL2: Performing memory compaction.
	[Mar29 19:10] WSL2: Performing memory compaction.
	[Mar29 19:11] WSL2: Performing memory compaction.
	[Mar29 19:12] WSL2: Performing memory compaction.
	[Mar29 19:13] WSL2: Performing memory compaction.
	[Mar29 19:14] WSL2: Performing memory compaction.
	[Mar29 19:16] WSL2: Performing memory compaction.
	[Mar29 19:17] WSL2: Performing memory compaction.
	[Mar29 19:18] WSL2: Performing memory compaction.
	[Mar29 19:19] WSL2: Performing memory compaction.
	[Mar29 19:21] hrtimer: interrupt took 376500 ns
	
	* 
	* ==> etcd [c57b86f45c9a] <==
	* {"level":"info","ts":"2022-03-29T19:22:15.489Z","caller":"traceutil/trace.go:171","msg":"trace[1799190612] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:1; response_revision:419; }","duration":"156.9482ms","start":"2022-03-29T19:22:15.332Z","end":"2022-03-29T19:22:15.489Z","steps":["trace[1799190612] 'agreement among raft nodes before linearized reading'  (duration: 97.8735ms)","trace[1799190612] 'range keys from in-memory index tree'  (duration: 58.7982ms)"],"step_count":2}
	{"level":"info","ts":"2022-03-29T19:22:15.490Z","caller":"traceutil/trace.go:171","msg":"trace[1581604826] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"145.4961ms","start":"2022-03-29T19:22:15.344Z","end":"2022-03-29T19:22:15.490Z","steps":["trace[1581604826] 'process raft request'  (duration: 145.1402ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-29T19:22:15.490Z","caller":"traceutil/trace.go:171","msg":"trace[1889808564] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"151.5908ms","start":"2022-03-29T19:22:15.339Z","end":"2022-03-29T19:22:15.490Z","steps":["trace[1889808564] 'process raft request'  (duration: 89.8969ms)","trace[1889808564] 'compare'  (duration: 60.3581ms)"],"step_count":2}
	{"level":"info","ts":"2022-03-29T19:22:15.490Z","caller":"traceutil/trace.go:171","msg":"trace[200638197] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"142.9989ms","start":"2022-03-29T19:22:15.347Z","end":"2022-03-29T19:22:15.490Z","steps":["trace[200638197] 'process raft request'  (duration: 142.3187ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-29T19:22:15.490Z","caller":"traceutil/trace.go:171","msg":"trace[149888094] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"146.9958ms","start":"2022-03-29T19:22:15.343Z","end":"2022-03-29T19:22:15.490Z","steps":["trace[149888094] 'process raft request'  (duration: 146.0028ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T19:22:15.491Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"144.1521ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-public/default\" ","response":"range_response_count:1 size:181"}
	{"level":"info","ts":"2022-03-29T19:22:15.491Z","caller":"traceutil/trace.go:171","msg":"trace[918326421] range","detail":"{range_begin:/registry/serviceaccounts/kube-public/default; range_end:; response_count:1; response_revision:429; }","duration":"144.3307ms","start":"2022-03-29T19:22:15.347Z","end":"2022-03-29T19:22:15.491Z","steps":["trace[918326421] 'agreement among raft nodes before linearized reading'  (duration: 144.106ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-29T19:22:15.659Z","caller":"traceutil/trace.go:171","msg":"trace[1877124612] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"110.0417ms","start":"2022-03-29T19:22:15.549Z","end":"2022-03-29T19:22:15.659Z","steps":["trace[1877124612] 'process raft request'  (duration: 93.8619ms)","trace[1877124612] 'compare'  (duration: 15.847ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-29T19:22:16.254Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.752ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:217"}
	{"level":"info","ts":"2022-03-29T19:22:16.254Z","caller":"traceutil/trace.go:171","msg":"trace[991327872] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:452; }","duration":"113.1273ms","start":"2022-03-29T19:22:16.141Z","end":"2022-03-29T19:22:16.254Z","steps":["trace[991327872] 'range keys from in-memory index tree'  (duration: 112.2081ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T19:22:16.948Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638327318028791105,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-03-29T19:22:17.449Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15638327318028791105,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-03-29T19:22:17.513Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"1.2369777s","expected-duration":"1s"}
	{"level":"info","ts":"2022-03-29T19:22:17.514Z","caller":"traceutil/trace.go:171","msg":"trace[1415518772] linearizableReadLoop","detail":"{readStateIndex:464; appliedIndex:464; }","duration":"1.0665467s","start":"2022-03-29T19:22:16.447Z","end":"2022-03-29T19:22:17.514Z","steps":["trace[1415518772] 'read index received'  (duration: 1.0665302s)","trace[1415518772] 'applied index is now lower than readState.Index'  (duration: 12µs)"],"step_count":2}
	{"level":"warn","ts":"2022-03-29T19:22:17.521Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.0739477s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-29T19:22:17.521Z","caller":"traceutil/trace.go:171","msg":"trace[412171520] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:453; }","duration":"1.0742973s","start":"2022-03-29T19:22:16.447Z","end":"2022-03-29T19:22:17.521Z","steps":["trace[412171520] 'agreement among raft nodes before linearized reading'  (duration: 1.0667587s)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T19:22:17.521Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-29T19:22:16.447Z","time spent":"1.0743843s","remote":"127.0.0.1:52832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-03-29T19:22:17.521Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-29T19:22:16.420Z","time spent":"1.101295s","remote":"127.0.0.1:53360","response type":"/etcdserverpb.Maintenance/Status","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2022-03-29T19:22:21.683Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"207.0549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 ","response":"range_response_count:1 size:4724"}
	{"level":"warn","ts":"2022-03-29T19:22:21.683Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"956.9423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-29T19:22:21.683Z","caller":"traceutil/trace.go:171","msg":"trace[1585775047] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:456; }","duration":"957.0166ms","start":"2022-03-29T19:22:20.726Z","end":"2022-03-29T19:22:21.683Z","steps":["trace[1585775047] 'range keys from in-memory index tree'  (duration: 956.2183ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T19:22:21.684Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-29T19:22:20.726Z","time spent":"957.0893ms","remote":"127.0.0.1:52832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2022-03-29T19:22:21.683Z","caller":"traceutil/trace.go:171","msg":"trace[334367703] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:456; }","duration":"207.2483ms","start":"2022-03-29T19:22:21.476Z","end":"2022-03-29T19:22:21.683Z","steps":["trace[334367703] 'range keys from in-memory index tree'  (duration: 206.9241ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T19:22:21.684Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"241.5283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-29T19:22:21.684Z","caller":"traceutil/trace.go:171","msg":"trace[1964069823] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:456; }","duration":"241.6024ms","start":"2022-03-29T19:22:21.442Z","end":"2022-03-29T19:22:21.684Z","steps":["trace[1964069823] 'range keys from in-memory index tree'  (duration: 241.3625ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:22:22 up  2:11,  0 users,  load average: 12.42, 7.01, 5.28
	Linux cert-expiration-20220329190729-1328 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [a5210787d024] <==
	* I0329 19:21:53.772880       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0329 19:21:53.843762       1 trace.go:205] Trace[2126972864]: "Create" url:/api/v1/namespaces,user-agent:Go-http-client/2.0,audit-id:c880d2c2-b7b6-47d1-b24c-8e105102793f,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (29-Mar-2022 19:21:47.875) (total time: 5968ms):
	Trace[2126972864]: ---"Object stored in database" 5967ms (19:21:53.843)
	Trace[2126972864]: [5.9681977s] [5.9681977s] END
	I0329 19:21:53.869974       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0329 19:21:53.870002       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0329 19:21:56.595542       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0329 19:21:58.448288       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0329 19:21:58.743725       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0329 19:21:59.128013       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0329 19:21:59.150974       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0329 19:21:59.153555       1 controller.go:611] quota admission added evaluator for: endpoints
	I0329 19:21:59.247997       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0329 19:22:00.135977       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0329 19:22:02.051894       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0329 19:22:02.168567       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0329 19:22:02.249605       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0329 19:22:13.350688       1 trace.go:205] Trace[2028502654]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/node-controller,user-agent:kube-controller-manager/v1.23.5 (linux/amd64) kubernetes/c285e78/kube-controller-manager,audit-id:bff93973-9940-4354-b40b-bef288e70243,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (29-Mar-2022 19:22:12.815) (total time: 534ms):
	Trace[2028502654]: ---"About to write a response" 534ms (19:22:13.350)
	Trace[2028502654]: [534.9845ms] [534.9845ms] END
	I0329 19:22:13.350782       1 trace.go:205] Trace[2103342587]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/service-controller,user-agent:kube-controller-manager/v1.23.5 (linux/amd64) kubernetes/c285e78/tokens-controller,audit-id:9b1b8828-c207-4cca-a56c-2b24c9fd762c,client:192.168.76.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (29-Mar-2022 19:22:12.814) (total time: 535ms):
	Trace[2103342587]: ---"About to write a response" 535ms (19:22:13.350)
	Trace[2103342587]: [535.8053ms] [535.8053ms] END
	I0329 19:22:15.143180       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0329 19:22:15.340638       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [18cab7150785] <==
	* 	/usr/local/go/src/bytes/buffer.go:204 +0x98
	crypto/tls.(*Conn).readFromUntil(0xc000a53500, {0x4d4fe80, 0xc000498c60}, 0x8ed)
		/usr/local/go/src/crypto/tls/conn.go:799 +0xe5
	crypto/tls.(*Conn).readRecordOrCCS(0xc000a53500, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:606 +0x112
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:574
	crypto/tls.(*Conn).Read(0xc000a53500, {0xc000ab9000, 0x1000, 0x919560})
		/usr/local/go/src/crypto/tls/conn.go:1277 +0x16f
	bufio.(*Reader).Read(0xc000434060, {0xc0001c22e0, 0x9, 0x934bc2})
		/usr/local/go/src/bufio/bufio.go:227 +0x1b4
	io.ReadAtLeast({0x4d47860, 0xc000434060}, {0xc0001c22e0, 0x9, 0x9}, 0x9)
		/usr/local/go/src/io/io.go:328 +0x9a
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:347
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc0001c22e0, 0x9, 0xc001b617a0}, {0x4d47860, 0xc000434060})
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x6e
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0001c22a0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:498 +0x95
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000427f98)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:2101 +0x130
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc000925380)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:725 +0xac5
	
	* 
	* ==> kube-controller-manager [bf1cdbe4a20e] <==
	* I0329 19:22:14.933795       1 event.go:294] "Event occurred" object="cert-expiration-20220329190729-1328" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node cert-expiration-20220329190729-1328 event: Registered Node cert-expiration-20220329190729-1328 in Controller"
	I0329 19:22:14.934545       1 shared_informer.go:247] Caches are synced for PV protection 
	I0329 19:22:14.934852       1 shared_informer.go:247] Caches are synced for expand 
	I0329 19:22:14.934875       1 shared_informer.go:247] Caches are synced for attach detach 
	I0329 19:22:14.940012       1 shared_informer.go:247] Caches are synced for disruption 
	I0329 19:22:14.940212       1 disruption.go:371] Sending events to api server.
	I0329 19:22:14.942226       1 shared_informer.go:247] Caches are synced for stateful set 
	I0329 19:22:14.942455       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0329 19:22:14.946582       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0329 19:22:14.951299       1 shared_informer.go:247] Caches are synced for deployment 
	I0329 19:22:15.036182       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0329 19:22:15.038842       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0329 19:22:15.047026       1 shared_informer.go:247] Caches are synced for job 
	I0329 19:22:15.050912       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0329 19:22:15.052986       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 19:22:15.127094       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 19:22:15.133204       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0329 19:22:15.135835       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0329 19:22:15.136594       1 shared_informer.go:247] Caches are synced for cronjob 
	I0329 19:22:15.349941       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sn7nc"
	I0329 19:22:15.531743       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 19:22:15.532675       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 1"
	I0329 19:22:15.627210       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 19:22:15.627250       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0329 19:22:15.791641       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-q4js5"
	
	* 
	* ==> kube-scheduler [5e2dba3aacfe] <==
	* W0329 19:21:52.503795       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0329 19:21:52.503920       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0329 19:21:52.696767       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0329 19:21:52.696888       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0329 19:21:52.947219       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0329 19:21:52.947337       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0329 19:21:53.314822       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0329 19:21:53.314957       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0329 19:21:53.393897       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0329 19:21:53.394078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0329 19:21:53.477448       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0329 19:21:53.477568       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0329 19:21:54.130945       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0329 19:21:54.131035       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0329 19:21:54.237041       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0329 19:21:54.237206       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0329 19:21:54.851096       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0329 19:21:54.851170       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0329 19:21:54.962952       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0329 19:21:54.963101       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0329 19:21:55.056831       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0329 19:21:55.056994       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0329 19:21:55.287659       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0329 19:21:55.288267       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0329 19:22:04.951062       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-03-29 19:20:53 UTC, end at Tue 2022-03-29 19:22:22 UTC. --
	Mar 29 19:22:04 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:04.443103    2194 reconciler.go:157] "Reconciler: start to sync state"
	Mar 29 19:22:04 cert-expiration-20220329190729-1328 kubelet[2194]: E0329 19:22:04.484040    2194 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-apiserver-cert-expiration-20220329190729-1328\" already exists" pod="kube-system/kube-apiserver-cert-expiration-20220329190729-1328"
	Mar 29 19:22:04 cert-expiration-20220329190729-1328 kubelet[2194]: E0329 19:22:04.687187    2194 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"etcd-cert-expiration-20220329190729-1328\" already exists" pod="kube-system/etcd-cert-expiration-20220329190729-1328"
	Mar 29 19:22:04 cert-expiration-20220329190729-1328 kubelet[2194]: E0329 19:22:04.687277    2194 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-cert-expiration-20220329190729-1328\" already exists" pod="kube-system/kube-controller-manager-cert-expiration-20220329190729-1328"
	Mar 29 19:22:14 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:14.933923    2194 kuberuntime_manager.go:1105] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 29 19:22:14 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:14.936995    2194 docker_service.go:364] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Mar 29 19:22:14 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:14.937774    2194 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 29 19:22:15 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:15.270577    2194 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 19:22:15 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:15.458771    2194 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j2n2\" (UniqueName: \"kubernetes.io/projected/48a80e82-c61c-47ce-9b5d-111896a071c9-kube-api-access-9j2n2\") pod \"storage-provisioner\" (UID: \"48a80e82-c61c-47ce-9b5d-111896a071c9\") " pod="kube-system/storage-provisioner"
	Mar 29 19:22:15 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:15.459214    2194 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/48a80e82-c61c-47ce-9b5d-111896a071c9-tmp\") pod \"storage-provisioner\" (UID: \"48a80e82-c61c-47ce-9b5d-111896a071c9\") " pod="kube-system/storage-provisioner"
	Mar 29 19:22:15 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:15.530952    2194 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 19:22:15 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:15.727786    2194 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d0cb3839-3217-42d5-9323-8d98fdef8fe7-kube-proxy\") pod \"kube-proxy-sn7nc\" (UID: \"d0cb3839-3217-42d5-9323-8d98fdef8fe7\") " pod="kube-system/kube-proxy-sn7nc"
	Mar 29 19:22:15 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:15.728425    2194 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzkx4\" (UniqueName: \"kubernetes.io/projected/d0cb3839-3217-42d5-9323-8d98fdef8fe7-kube-api-access-vzkx4\") pod \"kube-proxy-sn7nc\" (UID: \"d0cb3839-3217-42d5-9323-8d98fdef8fe7\") " pod="kube-system/kube-proxy-sn7nc"
	Mar 29 19:22:15 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:15.728541    2194 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0cb3839-3217-42d5-9323-8d98fdef8fe7-lib-modules\") pod \"kube-proxy-sn7nc\" (UID: \"d0cb3839-3217-42d5-9323-8d98fdef8fe7\") " pod="kube-system/kube-proxy-sn7nc"
	Mar 29 19:22:15 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:15.728634    2194 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0cb3839-3217-42d5-9323-8d98fdef8fe7-xtables-lock\") pod \"kube-proxy-sn7nc\" (UID: \"d0cb3839-3217-42d5-9323-8d98fdef8fe7\") " pod="kube-system/kube-proxy-sn7nc"
	Mar 29 19:22:15 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:15.836205    2194 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 19:22:16 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:16.034387    2194 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k96lx\" (UniqueName: \"kubernetes.io/projected/fd59e68f-9d0b-4b9e-a82d-b7c8695b08c2-kube-api-access-k96lx\") pod \"coredns-64897985d-q4js5\" (UID: \"fd59e68f-9d0b-4b9e-a82d-b7c8695b08c2\") " pod="kube-system/coredns-64897985d-q4js5"
	Mar 29 19:22:16 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:16.034596    2194 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd59e68f-9d0b-4b9e-a82d-b7c8695b08c2-config-volume\") pod \"coredns-64897985d-q4js5\" (UID: \"fd59e68f-9d0b-4b9e-a82d-b7c8695b08c2\") " pod="kube-system/coredns-64897985d-q4js5"
	Mar 29 19:22:18 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:18.519836    2194 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="26538294ee5c7e5c6a3b7bf052dfffae60d9de0fa811a9a2d0f0c9fc5631e691"
	Mar 29 19:22:18 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:18.528045    2194 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e2ffc51fbb6b9aaa52bab1575a40ee6342b70f58a3d1f0a97db5e1afa11806f3"
	Mar 29 19:22:18 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:18.534920    2194 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="168d1d0c841abc6f4ed810da729e2be8bc71d5cb97b636149ecd75cbcc7abfcc"
	Mar 29 19:22:21 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:21.720194    2194 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-q4js5 through plugin: invalid network status for"
	Mar 29 19:22:22 cert-expiration-20220329190729-1328 kubelet[2194]: I0329 19:22:22.598496    2194 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-q4js5 through plugin: invalid network status for"
	Mar 29 19:22:22 cert-expiration-20220329190729-1328 kubelet[2194]: E0329 19:22:22.603190    2194 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 0c9625690410f01f058eb60d8ad4b90e85e76087187af6657b7fd0a46ee98ead" containerID="0c9625690410f01f058eb60d8ad4b90e85e76087187af6657b7fd0a46ee98ead"
	Mar 29 19:22:22 cert-expiration-20220329190729-1328 kubelet[2194]: E0329 19:22:22.603333    2194 kuberuntime_manager.go:1079] "getPodContainerStatuses for pod failed" err="rpc error: code = Unknown desc = Error: No such container: 0c9625690410f01f058eb60d8ad4b90e85e76087187af6657b7fd0a46ee98ead" pod="kube-system/coredns-64897985d-q4js5"
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-expiration-20220329190729-1328 -n cert-expiration-20220329190729-1328
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-expiration-20220329190729-1328 -n cert-expiration-20220329190729-1328: (4.5763732s)
helpers_test.go:262: (dbg) Run:  kubectl --context cert-expiration-20220329190729-1328 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestCertExpiration]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context cert-expiration-20220329190729-1328 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context cert-expiration-20220329190729-1328 describe pod : exit status 1 (276.6643ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context cert-expiration-20220329190729-1328 describe pod : exit status 1
helpers_test.go:176: Cleaning up "cert-expiration-20220329190729-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-20220329190729-1328
E0329 19:22:30.875671    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:22:30.891659    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:22:30.907672    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:22:30.938666    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:22:30.986041    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:22:31.079445    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:22:31.252629    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:22:31.588251    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:22:32.234285    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:22:33.528880    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:22:36.103510    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:22:41.228646    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-20220329190729-1328: (21.7883415s)
--- FAIL: TestCertExpiration (920.56s)

TestFunctional/parallel/ServiceCmd (2142.84s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) Run:  kubectl --context functional-20220329172957-1328 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1449: (dbg) Run:  kubectl --context functional-20220329172957-1328 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) Done: kubectl --context functional-20220329172957-1328 expose deployment hello-node --type=NodePort --port=8080: (1.614281s)
functional_test.go:1454: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-54fbb85-fbwng" [2b0b3d47-096f-4948-b082-4425fa3b9347] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-54fbb85-fbwng" [2b0b3d47-096f-4948-b082-4425fa3b9347] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1454: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 29.0714796s
functional_test.go:1459: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1459: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 service list: (4.4645162s)
functional_test.go:1473: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1402: Failed to sent interrupt to proc not supported by windows

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1473: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 service --namespace=default --https --url hello-node: exit status 1 (34m49.7659986s)

-- stdout --
	https://127.0.0.1:54716

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1475: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-20220329172957-1328 service --namespace=default --https --url hello-node" : exit status 1
functional_test.go:1412: service test failed - dumping debug information
functional_test.go:1413: -----------------------service failure post-mortem--------------------------------
functional_test.go:1416: (dbg) Run:  kubectl --context functional-20220329172957-1328 describe po hello-node
functional_test.go:1420: hello-node pod describe:
Name:         hello-node-54fbb85-fbwng
Namespace:    default
Priority:     0
Node:         functional-20220329172957-1328/192.168.49.2
Start Time:   Tue, 29 Mar 2022 17:34:33 +0000
Labels:       app=hello-node
pod-template-hash=54fbb85
Annotations:  <none>
Status:       Running
IP:           172.17.0.6
IPs:
IP:           172.17.0.6
Controlled By:  ReplicaSet/hello-node-54fbb85
Containers:
echoserver:
Container ID:   docker://8a011d36f04518c00a47cdd6aca2a232c7009b68001c82075a8b2771940a7f21
Image:          k8s.gcr.io/echoserver:1.8
Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port:           <none>
Host Port:      <none>
State:          Running
Started:      Tue, 29 Mar 2022 17:34:56 +0000
Ready:          True
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6vstv (ro)
Conditions:
Type              Status
Initialized       True 
Ready             True 
ContainersReady   True 
PodScheduled      True 
Volumes:
kube-api-access-6vstv:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type    Reason     Age        From                                     Message
----    ------     ----       ----                                     -------
Normal  Scheduled  <unknown>                                           Successfully assigned default/hello-node-54fbb85-fbwng to functional-20220329172957-1328
Normal  Pulling    35m        kubelet, functional-20220329172957-1328  Pulling image "k8s.gcr.io/echoserver:1.8"
Normal  Pulled     35m        kubelet, functional-20220329172957-1328  Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 18.8322242s
Normal  Created    35m        kubelet, functional-20220329172957-1328  Created container echoserver
Normal  Started    35m        kubelet, functional-20220329172957-1328  Started container echoserver

Name:         hello-node-connect-74cf8bc446-9gn2c
Namespace:    default
Priority:     0
Node:         functional-20220329172957-1328/192.168.49.2
Start Time:   Tue, 29 Mar 2022 17:34:19 +0000
Labels:       app=hello-node-connect
pod-template-hash=74cf8bc446
Annotations:  <none>
Status:       Running
IP:           172.17.0.4
IPs:
IP:           172.17.0.4
Controlled By:  ReplicaSet/hello-node-connect-74cf8bc446
Containers:
echoserver:
Container ID:   docker://a62edced389d923b591afe6bfca4f13ac740bad9c5e5416b634dc1abf07b0007
Image:          k8s.gcr.io/echoserver:1.8
Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port:           <none>
Host Port:      <none>
State:          Running
Started:      Tue, 29 Mar 2022 17:34:38 +0000
Ready:          True
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wdvbw (ro)
Conditions:
Type              Status
Initialized       True 
Ready             True 
ContainersReady   True 
PodScheduled      True 
Volumes:
kube-api-access-wdvbw:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type    Reason     Age        From                                     Message
----    ------     ----       ----                                     -------
Normal  Scheduled  <unknown>                                           Successfully assigned default/hello-node-connect-74cf8bc446-9gn2c to functional-20220329172957-1328
Normal  Pulling    35m        kubelet, functional-20220329172957-1328  Pulling image "k8s.gcr.io/echoserver:1.8"
Normal  Pulled     35m        kubelet, functional-20220329172957-1328  Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 15.1532054s
Normal  Created    35m        kubelet, functional-20220329172957-1328  Created container echoserver
Normal  Started    35m        kubelet, functional-20220329172957-1328  Started container echoserver

functional_test.go:1422: (dbg) Run:  kubectl --context functional-20220329172957-1328 logs -l app=hello-node
functional_test.go:1426: hello-node logs:
functional_test.go:1428: (dbg) Run:  kubectl --context functional-20220329172957-1328 describe svc hello-node
functional_test.go:1432: hello-node svc describe:
Name:                     hello-node
Namespace:                default
Labels:                   app=hello-node
Annotations:              <none>
Selector:                 app=hello-node
Type:                     NodePort
IP:                       10.110.182.112
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32305/TCP
Endpoints:                172.17.0.6:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect functional-20220329172957-1328
helpers_test.go:236: (dbg) docker inspect functional-20220329172957-1328:

-- stdout --
	[
	    {
	        "Id": "fc92b10f555ce9add4b98e8a609299325e736406227b8a0a83a1465f548f1af3",
	        "Created": "2022-03-29T17:30:44.4643371Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 22909,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-29T17:30:46.0261337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/fc92b10f555ce9add4b98e8a609299325e736406227b8a0a83a1465f548f1af3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc92b10f555ce9add4b98e8a609299325e736406227b8a0a83a1465f548f1af3/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc92b10f555ce9add4b98e8a609299325e736406227b8a0a83a1465f548f1af3/hosts",
	        "LogPath": "/var/lib/docker/containers/fc92b10f555ce9add4b98e8a609299325e736406227b8a0a83a1465f548f1af3/fc92b10f555ce9add4b98e8a609299325e736406227b8a0a83a1465f548f1af3-json.log",
	        "Name": "/functional-20220329172957-1328",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220329172957-1328:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220329172957-1328",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b1dbc0e0cd8ce08725be3f673171c349064c8c93ca6062426b455f1fdddd97dd-init/diff:/var/lib/docker/overlay2/4eae5e38ad3553f9f0fde74ad732117b98cb0e1af550ecd7ce386997eede943f/diff:/var/lib/docker/overlay2/6789b74c71a0164bd481c99dc53318989abbcdc33b160f5d04f44aee12c80671/diff:/var/lib/docker/overlay2/91c6ac2f9a1035ebae76daccc83a3cafe5d26b2bd6b60ad54a6e29588a7003f8/diff:/var/lib/docker/overlay2/a916d7329da723d8397bfda8e20f2beb9156ceece20236242a811e43984bbfeb/diff:/var/lib/docker/overlay2/b046f566fd53b4f2f6d2c347c752b47f6c1a64316baeaa8c0fda825346ef7aba/diff:/var/lib/docker/overlay2/13a76ad56283b88db0508d09cc281c66801cee04cdbdd8f00827788d5231a025/diff:/var/lib/docker/overlay2/8e95b9ffc444e9f6b52db61f07f0a93bb3feb51b5d9dab6b7df487fef8d277f6/diff:/var/lib/docker/overlay2/bf807f6bedece6f8033221974e6b2ffdf94a6f9320d4f09337ed51b411f8f999/diff:/var/lib/docker/overlay2/d8184ca2707eba09a4f6bd90cad4795ce0f226f863f2d84723287ad76f1158d8/diff:/var/lib/docker/overlay2/3906858e1746cab95814956b950325758e0765c0a6597b3d9062a4c36ab409be/diff:/var/lib/docker/overlay2/128db97cb7dee3d09e506aaaf97a45b5a647d8eb90782f5dd444aec15ff525da/diff:/var/lib/docker/overlay2/713bbf0f0ba84035f3a06b59c058ccfe9e7639f2ecb9d3db244e1adec7b6c46b/diff:/var/lib/docker/overlay2/6a820465cd423660c71cbb6741a47e4619efcf0010ac49bd49146501b9ac4925/diff:/var/lib/docker/overlay2/20c66385f330043e2c50b8193a59172de08776bbabdca289cb51c1b5f17e9b98/diff:/var/lib/docker/overlay2/7b2439fa81d8ff403bd5767752380391449aeba92453e1846fd36cfce9e6de61/diff:/var/lib/docker/overlay2/ee227ab74915b1419cfbc67f2b14b08cf564b4a38a39b157de2c65250a9172bf/diff:/var/lib/docker/overlay2/0b92e2531a28b01133cc2ab65802b03c04ef0213e850ac8558c9c4071fd018dd/diff:/var/lib/docker/overlay2/3de4968e9a773e45d79b096d23038e48758528adce69f14e7ff3a93bbd3192d7/diff:/var/lib/docker/overlay2/92eb87a3831ecebb34eb1e0ea7a71af9883f8426f35387845769f5fe75f04a52/diff:/var/lib/docker/overlay2/82a4c6fc3869bde23593a8490af76e406ad5a27ef1c30a38b481944390f7466e/diff:/var/lib/docker/overlay2/6c957b5c04708287c2261d895a0f4563f25cc766eb21913c4ceb36f27a04914e/diff:/var/lib/docker/overlay2/21df3fb223398ef06fb62c4617e3487f0ac955e4f38ee3d2d72c9da488d436c7/diff:/var/lib/docker/overlay2/ddaf18203a4027208ea592b9716939849af0aa5d2cac57d2b0c36382e078f483/diff:/var/lib/docker/overlay2/9a82b4c496462c1bf59ccb096f886e61674d92540023b7fed618682584358cbf/diff:/var/lib/docker/overlay2/62a8d9c5758a93af517541ab9d841f9415f55ca5503844371b7e35d47838dbb0/diff:/var/lib/docker/overlay2/c17d3885b54e341402c392175e2ab4ff1ab038acafe82a8090b1725613597f95/diff:/var/lib/docker/overlay2/d1401e4d6e04dded3c7d0335e32d0eb6cf2d7c19d21da53b836d591dddac8961/diff:/var/lib/docker/overlay2/7c4934c7f4f9cce1a35b340eebbc473f9bb33153f61f1c0454bffd0b2ae5a37e/diff:/var/lib/docker/overlay2/02d6bd07f6dbb7198d2c42fe26ff2efbabb9a889dfa0b79fd05e06a021bc81b4/diff:/var/lib/docker/overlay2/137f83b86485992317df9126e714cd331df51131ac4990d1040cf54cace6506e/diff:/var/lib/docker/overlay2/75d1117a1f5f001df3981193d1251ab8426eb4c100c9c1bbb946f0c2e0e1d73c/diff:/var/lib/docker/overlay2/b20542be533b230be3dee06af0364759a81f26397d9371a7052efdac48fc1a3e/diff:/var/lib/docker/overlay2/b6103a89043f339bfc18a195b11f4a57f6042806725aac9d6b8db0e2af4fe01e/diff:/var/lib/docker/overlay2/69041f5eef389b325dd43fa81731c884299e2cb880a57ba904b8752c12446236/diff:/var/lib/docker/overlay2/8bc9de0232e5ba86f129e746c52a7f53836827a1a9cfc8e0c731d81af17b92a4/diff:/var/lib/docker/overlay2/5494bafa4607149ff46b2ed95fd9c86139339508d3c27bf32346963a41ae95f1/diff:/var/lib/docker/overlay2/daaadc749b2e3fb99bb23ec4d0a908e70deef3f9caff12f7b3fa29a57086e13a/diff:/var/lib/docker/overlay2/35b939c7fd0daf3717995c2aff595f96a741b48ae2da6b523aeda782ea3922e9/diff:/var/lib/docker/overlay2/b5a01cc1c410e803d28949ef6f35b55ac04473d89beb188d9d4866287b7cbbee/diff:/var/lib/docker/overlay2/c26c0af38634a15c6619c42bd2e5ec804bab550ff8078c084ba220030d8f4b93/diff:/var/lib/docker/overlay2/c12adb9eba87b6903ac0b2e16234b6a4f11a66d10d30d5379b19963433b76506/diff:/var/lib/docker/overlay2/537ea8129185a2faaaafa08ee553e15fe2cee04e80dab99066f779573324b53c/diff:/var/lib/docker/overlay2/ba74848f80f8d422a61241b3778f2395a32e73958e6a6dfddf5724bd0367dc67/diff:/var/lib/docker/overlay2/be8013e1c023e08543e181408137e02941d2b05181428b80bf154108c0cf48a5/diff:/var/lib/docker/overlay2/895568f040b89c0f90e7f4e41a1a77ca025acd0a0e0682a242f830a2e9c4ede7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b1dbc0e0cd8ce08725be3f673171c349064c8c93ca6062426b455f1fdddd97dd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b1dbc0e0cd8ce08725be3f673171c349064c8c93ca6062426b455f1fdddd97dd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b1dbc0e0cd8ce08725be3f673171c349064c8c93ca6062426b455f1fdddd97dd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220329172957-1328",
	                "Source": "/var/lib/docker/volumes/functional-20220329172957-1328/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220329172957-1328",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220329172957-1328",
	                "name.minikube.sigs.k8s.io": "functional-20220329172957-1328",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bcf2fddc8408c085c088e6a72857771b62bff3fc25976fbcaee1510e565aa00e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54456"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54457"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54458"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54459"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54460"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bcf2fddc8408",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220329172957-1328": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "fc92b10f555c",
	                        "functional-20220329172957-1328"
	                    ],
	                    "NetworkID": "0b6fab5225e6d6ea969a70010acd3c4584476ea9c109deb0b452771a96bbb0c0",
	                    "EndpointID": "c8b90169717752f36026740f5f8f9220910acd5d0f0ba214903ef2c852fe3e36",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220329172957-1328 -n functional-20220329172957-1328
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220329172957-1328 -n functional-20220329172957-1328: (4.1190935s)
helpers_test.go:245: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 logs -n 25: (6.835339s)
helpers_test.go:253: TestFunctional/parallel/ServiceCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------|--------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	| Command |                                    Args                                     |            Profile             |       User        | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------------------------------------------|--------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:35 GMT | Tue, 29 Mar 2022 17:35:38 GMT |
	|         | ssh sudo cat                                                                |                                |                   |         |                               |                               |
	|         | /usr/share/ca-certificates/13282.pem                                        |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:39 GMT | Tue, 29 Mar 2022 17:35:42 GMT |
	|         | ssh sudo cat                                                                |                                |                   |         |                               |                               |
	|         | /etc/ssl/certs/3ec20f2e.0                                                   |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328 image load --daemon                          | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:40 GMT | Tue, 29 Mar 2022 17:35:58 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220329172957-1328       |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:35:58 GMT | Tue, 29 Mar 2022 17:36:01 GMT |
	|         | image ls                                                                    |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328 image save                                   | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:36:01 GMT | Tue, 29 Mar 2022 17:36:13 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220329172957-1328       |                                |                   |         |                               |                               |
	|         | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar      |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328 image rm                                     | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:36:13 GMT | Tue, 29 Mar 2022 17:36:17 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220329172957-1328       |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:36:17 GMT | Tue, 29 Mar 2022 17:36:20 GMT |
	|         | image ls                                                                    |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328 image load                                   | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:36:21 GMT | Tue, 29 Mar 2022 17:36:29 GMT |
	|         | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar      |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:36:30 GMT | Tue, 29 Mar 2022 17:36:32 GMT |
	|         | image ls                                                                    |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328 image save --daemon                          | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:36:33 GMT | Tue, 29 Mar 2022 17:36:42 GMT |
	|         | gcr.io/google-containers/addon-resizer:functional-20220329172957-1328       |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:36:43 GMT | Tue, 29 Mar 2022 17:36:46 GMT |
	|         | cp testdata\cp-test.txt                                                     |                                |                   |         |                               |                               |
	|         | /home/docker/cp-test.txt                                                    |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:36:47 GMT | Tue, 29 Mar 2022 17:36:50 GMT |
	|         | ssh -n                                                                      |                                |                   |         |                               |                               |
	|         | functional-20220329172957-1328                                              |                                |                   |         |                               |                               |
	|         | sudo cat                                                                    |                                |                   |         |                               |                               |
	|         | /home/docker/cp-test.txt                                                    |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328 cp                                           | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:36:51 GMT | Tue, 29 Mar 2022 17:36:54 GMT |
	|         | functional-20220329172957-1328:/home/docker/cp-test.txt                     |                                |                   |         |                               |                               |
	|         | C:\Users\jenkins.minikube8\AppData\Local\Temp\mk_test4014570281\cp-test.txt |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:36:54 GMT | Tue, 29 Mar 2022 17:36:58 GMT |
	|         | ssh -n                                                                      |                                |                   |         |                               |                               |
	|         | functional-20220329172957-1328                                              |                                |                   |         |                               |                               |
	|         | sudo cat                                                                    |                                |                   |         |                               |                               |
	|         | /home/docker/cp-test.txt                                                    |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:36:58 GMT | Tue, 29 Mar 2022 17:36:58 GMT |
	|         | version --short                                                             |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:36:59 GMT | Tue, 29 Mar 2022 17:37:03 GMT |
	|         | version -o=json --components                                                |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:37:02 GMT | Tue, 29 Mar 2022 17:37:05 GMT |
	|         | update-context                                                              |                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=2                                                      |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:37:03 GMT | Tue, 29 Mar 2022 17:37:06 GMT |
	|         | update-context                                                              |                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=2                                                      |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:37:05 GMT | Tue, 29 Mar 2022 17:37:08 GMT |
	|         | update-context                                                              |                                |                   |         |                               |                               |
	|         | --alsologtostderr -v=2                                                      |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:37:06 GMT | Tue, 29 Mar 2022 17:37:09 GMT |
	|         | image ls --format short                                                     |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:37:08 GMT | Tue, 29 Mar 2022 17:37:11 GMT |
	|         | image ls --format yaml                                                      |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:37:11 GMT | Tue, 29 Mar 2022 17:37:14 GMT |
	|         | image ls --format json                                                      |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:37:15 GMT | Tue, 29 Mar 2022 17:37:17 GMT |
	|         | image ls --format table                                                     |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328 image build -t                               | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:37:13 GMT | Tue, 29 Mar 2022 17:37:20 GMT |
	|         | localhost/my-image:functional-20220329172957-1328                           |                                |                   |         |                               |                               |
	|         | testdata\build                                                              |                                |                   |         |                               |                               |
	| -p      | functional-20220329172957-1328                                              | functional-20220329172957-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 17:37:20 GMT | Tue, 29 Mar 2022 17:37:23 GMT |
	|         | image ls                                                                    |                                |                   |         |                               |                               |
	|---------|-----------------------------------------------------------------------------|--------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 17:34:12
	Running on machine: minikube8
	Binary: Built with gc go1.17.7 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 17:34:12.223582    1888 out.go:297] Setting OutFile to fd 756 ...
	I0329 17:34:12.288586    1888 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:34:12.288586    1888 out.go:310] Setting ErrFile to fd 668...
	I0329 17:34:12.288586    1888 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:34:12.299583    1888 out.go:304] Setting JSON to false
	I0329 17:34:12.301582    1888 start.go:114] hostinfo: {"hostname":"minikube8","uptime":2449,"bootTime":1648572803,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0329 17:34:12.301582    1888 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 17:34:12.311584    1888 out.go:176] * [functional-20220329172957-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0329 17:34:12.315585    1888 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0329 17:34:12.317583    1888 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0329 17:34:12.320584    1888 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 17:34:12.322582    1888 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 17:34:12.323586    1888 config.go:176] Loaded profile config "functional-20220329172957-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:34:12.324581    1888 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 17:34:14.320589    1888 docker.go:137] docker version: linux-20.10.13
	I0329 17:34:14.333585    1888 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:34:15.037012    1888 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:57 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-29 17:34:14.7212757 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:34:15.042993    1888 out.go:176] * Using the docker driver based on existing profile
	I0329 17:34:15.042993    1888 start.go:283] selected driver: docker
	I0329 17:34:15.042993    1888 start.go:800] validating driver "docker" against &{Name:functional-20220329172957-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329172957-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:34:15.042993    1888 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0329 17:34:15.067008    1888 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:34:15.795006    1888 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-29 17:34:15.4512208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:34:15.842154    1888 cni.go:93] Creating CNI manager for ""
	I0329 17:34:15.842154    1888 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 17:34:15.842154    1888 start_flags.go:306] config:
	{Name:functional-20220329172957-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329172957-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-03-29 17:30:46 UTC, end at Tue 2022-03-29 18:10:07 UTC. --
	Mar 29 17:30:57 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:30:57.500438800Z" level=info msg="Daemon has completed initialization"
	Mar 29 17:30:57 functional-20220329172957-1328 systemd[1]: Started Docker Application Container Engine.
	Mar 29 17:30:57 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:30:57.559120800Z" level=info msg="API listen on [::]:2376"
	Mar 29 17:30:57 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:30:57.567777200Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 29 17:33:14 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:14.827672800Z" level=info msg="ignoring event" container=aff3292d13fe459be6cee14126b372912234ea57a2a9cfeb6c47bcd47f6088f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:14 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:14.829773600Z" level=info msg="ignoring event" container=13acdaae3280ad9b75dab3d58c52710eabb02190f2bc1ae5f56d5a35c1abfff1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:14 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:14.829861900Z" level=info msg="ignoring event" container=e3006264e73101a801d45be0d55c189c8125fa27977ada0e641a9d5a9a78565a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:15 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:14.940102100Z" level=info msg="ignoring event" container=e0da4aa3fa441e162c7cf28cc2109300462b71e21bd7155a98c42a34a04004fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:15 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:15.028554300Z" level=info msg="ignoring event" container=3e230eecc699cd3cdcd0dce8175d7ae0af4836b31c966504b012d8d49c3186ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:15 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:15.030456800Z" level=info msg="ignoring event" container=e718f52bd1bb2f82fcd98e6134aedc344eac791c13217afea2ff92a7e4c7ecc5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:15 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:15.229683200Z" level=info msg="ignoring event" container=aac316136efec474d19e6476ffdba679285a4dce453fddbb4d706800a7957a0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:15 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:15.333161700Z" level=info msg="ignoring event" container=1b2af93b9c536e8faa4760abe312f0c5e9f732e2d0bd734d2e20b79afa93cdc8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:15 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:15.338311800Z" level=info msg="ignoring event" container=3283aac9e9d458a07d1ba35ac03af511f34dfeef787d0bb9e7d2c9c21473e64c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:15 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:15.343870300Z" level=info msg="ignoring event" container=4f33a35c385f3d4689b15ea73460a49f54764cde9570c2b59ad6c479a2084039 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:15 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:15.532022600Z" level=info msg="ignoring event" container=b9f767826891f6afbfd6b16c4d92810ed49341cb55ba5fb9a4c2831690c62491 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:16 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:16.338493000Z" level=info msg="ignoring event" container=fa21a26c2ee61ed4ccc43e58faea686641957d2a52b38cbccf90fdf2bfec15d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:16 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:16.509202700Z" level=info msg="ignoring event" container=58b93fed034c508808b421092c08842c0dad15bbaf430b32a976361761c9d108 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:19 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:19.527256000Z" level=info msg="ignoring event" container=5f47160e5e671e9ee38fc01b6bce5469c44e1ee0771c3f0b1219e447242e416b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:20 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:20.230113100Z" level=info msg="ignoring event" container=b113774cb04bf873023a6116637d05a775f44268dce84d71409d12bc45747eed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:29 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:29.933650900Z" level=info msg="ignoring event" container=1f2673eda831911b0c63449dd23d6f30ec65fcbff6850b41929bf746aa8004a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:33:30 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:33:30.049772300Z" level=info msg="ignoring event" container=433f316f94292dc3bb8a556a3f8f3bf8992ed8ee564764060ce8af35e3d3cdbf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:03 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:35:03.283040400Z" level=info msg="ignoring event" container=f6f8caac6d68b92fe20428e8d9dfee313a711227c38bd261908f0fa7746fab7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:35:03 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:35:03.514679600Z" level=info msg="ignoring event" container=013c2da102363b177b6c27e102cb4ad9d36edfcf8311da6a562abadb6e65c028 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:37:19 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:37:19.126694800Z" level=info msg="ignoring event" container=f110e9e604d9d163c381558fc2c569e5e393454deab62849400c7d3382d7d45e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:37:19 functional-20220329172957-1328 dockerd[472]: time="2022-03-29T17:37:19.748127600Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	7c05f62e576d8       mysql@sha256:c8f68301981a7224cc9c063fc7a97b6ef13cfc4142b4871d1a35c95777ce96f4                   33 minutes ago      Running             mysql                     0                   de3ff2eeaba67
	b9ee1273208c2       nginx@sha256:e48e9d28dd773886b5c4b86db4e411eedf46bb98095f5c03c3f6a167a633dcf0                   35 minutes ago      Running             myfrontend                0                   094554dc798e6
	8a011d36f0451       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   35 minutes ago      Running             echoserver                0                   ac757c608a78d
	a62edced389d9       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   35 minutes ago      Running             echoserver                0                   1ca1f63e4933c
	eaf169657220c       nginx@sha256:db7973cb238c8e8acea5982c1048b5987e9e4da60d20daeef7301757de97357a                   35 minutes ago      Running             nginx                     0                   a7733af0b405e
	ceafdeb9732eb       a4ca41631cc7a                                                                                   36 minutes ago      Running             coredns                   1                   8fee99e7b966b
	010f8c02c981e       6e38f40d628db                                                                                   36 minutes ago      Running             storage-provisioner       2                   c607a5f879086
	f79ac29cd4f48       3fc1d62d65872                                                                                   36 minutes ago      Running             kube-apiserver            0                   e224238d7375e
	5f47160e5e671       6e38f40d628db                                                                                   36 minutes ago      Exited              storage-provisioner       1                   c607a5f879086
	73b06757d8b61       884d49d6d8c9f                                                                                   36 minutes ago      Running             kube-scheduler            1                   90c1ab0191c95
	b17cf7ddc9ae3       25f8c7f3da61c                                                                                   36 minutes ago      Running             etcd                      1                   4a21990cc7911
	73c61e2aceded       b0c9e5e4dbb14                                                                                   36 minutes ago      Running             kube-controller-manager   1                   e66aa1ed9e52c
	dd1502fa70ee1       3c53fa8541f95                                                                                   36 minutes ago      Running             kube-proxy                1                   19ce0d94c467f
	b113774cb04bf       a4ca41631cc7a                                                                                   38 minutes ago      Exited              coredns                   0                   e0da4aa3fa441
	3283aac9e9d45       3c53fa8541f95                                                                                   38 minutes ago      Exited              kube-proxy                0                   aff3292d13fe4
	58b93fed034c5       884d49d6d8c9f                                                                                   38 minutes ago      Exited              kube-scheduler            0                   aac316136efec
	b9f767826891f       b0c9e5e4dbb14                                                                                   38 minutes ago      Exited              kube-controller-manager   0                   1b2af93b9c536
	4f33a35c385f3       25f8c7f3da61c                                                                                   38 minutes ago      Exited              etcd                      0                   3e230eecc699c
	
	* 
	* ==> coredns [b113774cb04b] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [ceafdeb9732e] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220329172957-1328
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220329172957-1328
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3
	                    minikube.k8s.io/name=functional-20220329172957-1328
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_29T17_31_26_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 29 Mar 2022 17:31:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220329172957-1328
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 29 Mar 2022 18:10:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 29 Mar 2022 18:08:20 +0000   Tue, 29 Mar 2022 17:31:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 29 Mar 2022 18:08:20 +0000   Tue, 29 Mar 2022 17:31:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 29 Mar 2022 18:08:20 +0000   Tue, 29 Mar 2022 17:31:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 29 Mar 2022 18:08:20 +0000   Tue, 29 Mar 2022 17:33:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220329172957-1328
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                140a143b31184b58be947b52a01fff83
	  Boot ID:                    c6888bb0-0d7a-4902-95ce-20313bf24adc
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.13
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-54fbb85-fbwng                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         35m
	  default                     hello-node-connect-74cf8bc446-9gn2c                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         35m
	  default                     mysql-b87c45988-p2gxr                                     600m (3%)     700m (4%)   512Mi (0%)       700Mi (1%)     34m
	  default                     nginx-svc                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         35m
	  default                     sp-pod                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         35m
	  kube-system                 coredns-64897985d-hkk6g                                   100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     38m
	  kube-system                 etcd-functional-20220329172957-1328                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         38m
	  kube-system                 kube-apiserver-functional-20220329172957-1328             250m (1%)     0 (0%)      0 (0%)           0 (0%)         36m
	  kube-system                 kube-controller-manager-functional-20220329172957-1328    200m (1%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-proxy-lbxsz                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-scheduler-functional-20220329172957-1328             100m (0%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                1350m (8%)  700m (4%)
	  memory             682Mi (1%)  870Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 38m                kube-proxy  
	  Normal  Starting                 36m                kube-proxy  
	  Normal  NodeHasNoDiskPressure    38m (x7 over 38m)  kubelet     Node functional-20220329172957-1328 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38m (x7 over 38m)  kubelet     Node functional-20220329172957-1328 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  38m (x7 over 38m)  kubelet     Node functional-20220329172957-1328 status is now: NodeHasSufficientMemory
	  Normal  Starting                 38m                kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    38m                kubelet     Node functional-20220329172957-1328 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38m                kubelet     Node functional-20220329172957-1328 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38m                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  38m                kubelet     Node functional-20220329172957-1328 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                38m                kubelet     Node functional-20220329172957-1328 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  36m                kubelet     Node functional-20220329172957-1328 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36m                kubelet     Node functional-20220329172957-1328 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36m                kubelet     Node functional-20220329172957-1328 status is now: NodeHasSufficientPID
	  Normal  Starting                 36m                kubelet     Starting kubelet.
	  Normal  NodeNotReady             36m                kubelet     Node functional-20220329172957-1328 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  36m                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                36m                kubelet     Node functional-20220329172957-1328 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Mar29 17:45] WSL2: Performing memory compaction.
	[Mar29 17:46] WSL2: Performing memory compaction.
	[Mar29 17:47] WSL2: Performing memory compaction.
	[Mar29 17:48] WSL2: Performing memory compaction.
	[Mar29 17:49] WSL2: Performing memory compaction.
	[Mar29 17:50] WSL2: Performing memory compaction.
	[Mar29 17:51] WSL2: Performing memory compaction.
	[Mar29 17:52] WSL2: Performing memory compaction.
	[Mar29 17:53] WSL2: Performing memory compaction.
	[Mar29 17:54] WSL2: Performing memory compaction.
	[Mar29 17:55] WSL2: Performing memory compaction.
	[Mar29 17:56] WSL2: Performing memory compaction.
	[Mar29 17:57] WSL2: Performing memory compaction.
	[Mar29 17:58] WSL2: Performing memory compaction.
	[Mar29 17:59] WSL2: Performing memory compaction.
	[Mar29 18:00] WSL2: Performing memory compaction.
	[Mar29 18:01] WSL2: Performing memory compaction.
	[Mar29 18:02] WSL2: Performing memory compaction.
	[Mar29 18:03] WSL2: Performing memory compaction.
	[Mar29 18:04] WSL2: Performing memory compaction.
	[Mar29 18:05] WSL2: Performing memory compaction.
	[Mar29 18:06] WSL2: Performing memory compaction.
	[Mar29 18:07] WSL2: Performing memory compaction.
	[Mar29 18:08] WSL2: Performing memory compaction.
	[Mar29 18:09] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [4f33a35c385f] <==
	* {"level":"info","ts":"2022-03-29T17:31:38.447Z","caller":"traceutil/trace.go:171","msg":"trace[2024880983] range","detail":"{range_begin:/registry/serviceaccounts/kube-public/default; range_end:; response_count:1; response_revision:411; }","duration":"112.89ms","start":"2022-03-29T17:31:38.334Z","end":"2022-03-29T17:31:38.447Z","steps":["trace[2024880983] 'agreement among raft nodes before linearized reading'  (duration: 112.8076ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T17:31:38.448Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.5883ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-functional-20220329172957-1328\" ","response":"range_response_count:1 size:5330"}
	{"level":"info","ts":"2022-03-29T17:31:38.448Z","caller":"traceutil/trace.go:171","msg":"trace[1227277695] range","detail":"{range_begin:/registry/pods/kube-system/etcd-functional-20220329172957-1328; range_end:; response_count:1; response_revision:411; }","duration":"107.6247ms","start":"2022-03-29T17:31:38.340Z","end":"2022-03-29T17:31:38.448Z","steps":["trace[1227277695] 'agreement among raft nodes before linearized reading'  (duration: 107.5622ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T17:31:38.448Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:260"}
	{"level":"info","ts":"2022-03-29T17:31:38.448Z","caller":"traceutil/trace.go:171","msg":"trace[1577845522] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:411; }","duration":"112.0065ms","start":"2022-03-29T17:31:38.336Z","end":"2022-03-29T17:31:38.448Z","steps":["trace[1577845522] 'agreement among raft nodes before linearized reading'  (duration: 111.8724ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-29T17:31:38.652Z","caller":"traceutil/trace.go:171","msg":"trace[1237334407] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"115.8504ms","start":"2022-03-29T17:31:38.536Z","end":"2022-03-29T17:31:38.652Z","steps":["trace[1237334407] 'process raft request'  (duration: 91.5662ms)","trace[1237334407] 'compare'  (duration: 23.9347ms)"],"step_count":2}
	{"level":"info","ts":"2022-03-29T17:31:38.652Z","caller":"traceutil/trace.go:171","msg":"trace[1238529072] linearizableReadLoop","detail":"{readStateIndex:428; appliedIndex:427; }","duration":"115.6224ms","start":"2022-03-29T17:31:38.537Z","end":"2022-03-29T17:31:38.652Z","steps":["trace[1238529072] 'read index received'  (duration: 90.5167ms)","trace[1238529072] 'applied index is now lower than readState.Index'  (duration: 25.1005ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-29T17:31:38.653Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"117.0019ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/functional-20220329172957-1328\" ","response":"range_response_count:1 size:4791"}
	{"level":"warn","ts":"2022-03-29T17:31:38.653Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.0911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3571"}
	{"level":"info","ts":"2022-03-29T17:31:38.653Z","caller":"traceutil/trace.go:171","msg":"trace[437068388] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:425; }","duration":"118.1435ms","start":"2022-03-29T17:31:38.535Z","end":"2022-03-29T17:31:38.653Z","steps":["trace[437068388] 'agreement among raft nodes before linearized reading'  (duration: 118.0485ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-29T17:31:38.653Z","caller":"traceutil/trace.go:171","msg":"trace[1294725088] range","detail":"{range_begin:/registry/minions/functional-20220329172957-1328; range_end:; response_count:1; response_revision:425; }","duration":"117.076ms","start":"2022-03-29T17:31:38.536Z","end":"2022-03-29T17:31:38.653Z","steps":["trace[1294725088] 'agreement among raft nodes before linearized reading'  (duration: 116.9596ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T17:31:38.929Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.4526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/kube-system/kube-dns-d6lcr\" ","response":"range_response_count:1 size:912"}
	{"level":"info","ts":"2022-03-29T17:31:38.929Z","caller":"traceutil/trace.go:171","msg":"trace[770675310] range","detail":"{range_begin:/registry/endpointslices/kube-system/kube-dns-d6lcr; range_end:; response_count:1; response_revision:437; }","duration":"100.6155ms","start":"2022-03-29T17:31:38.828Z","end":"2022-03-29T17:31:38.929Z","steps":["trace[770675310] 'range keys from in-memory index tree'  (duration: 100.2289ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T17:31:39.061Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.0944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3921"}
	{"level":"info","ts":"2022-03-29T17:31:39.061Z","caller":"traceutil/trace.go:171","msg":"trace[2094379679] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:446; }","duration":"118.2662ms","start":"2022-03-29T17:31:38.943Z","end":"2022-03-29T17:31:39.061Z","steps":["trace[2094379679] 'agreement among raft nodes before linearized reading'  (duration: 96.5876ms)","trace[2094379679] 'range keys from in-memory index tree'  (duration: 21.4798ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-29T17:31:43.332Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.5546ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128011987739375201 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-64897985d-hkk6g.16e0ea5261360584\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-64897985d-hkk6g.16e0ea5261360584\" value_size:558 lease:8128011987739374676 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2022-03-29T17:31:43.332Z","caller":"traceutil/trace.go:171","msg":"trace[812545278] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"102.4262ms","start":"2022-03-29T17:31:43.230Z","end":"2022-03-29T17:31:43.332Z","steps":["trace[812545278] 'compare'  (duration: 101.263ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-29T17:33:14.928Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-03-29T17:33:14.928Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220329172957-1328","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/03/29 17:33:14 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/03/29 17:33:15 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-03-29T17:33:15.028Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-03-29T17:33:15.037Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-29T17:33:15.039Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-29T17:33:15.039Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220329172957-1328","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [b17cf7ddc9ae] <==
	* {"level":"warn","ts":"2022-03-29T17:36:09.648Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-29T17:36:09.249Z","time spent":"398.5982ms","remote":"127.0.0.1:58216","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2022-03-29T17:36:09.648Z","caller":"traceutil/trace.go:171","msg":"trace[984172339] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:849; }","duration":"2.0905042s","start":"2022-03-29T17:36:07.557Z","end":"2022-03-29T17:36:09.648Z","steps":["trace[984172339] 'agreement among raft nodes before linearized reading'  (duration: 2.0900414s)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T17:36:09.648Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-29T17:36:07.557Z","time spent":"2.09058s","remote":"127.0.0.1:58118","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1156,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2022-03-29T17:36:18.649Z","caller":"traceutil/trace.go:171","msg":"trace[1366229735] linearizableReadLoop","detail":"{readStateIndex:944; appliedIndex:944; }","duration":"408.4102ms","start":"2022-03-29T17:36:18.241Z","end":"2022-03-29T17:36:18.649Z","steps":["trace[1366229735] 'read index received'  (duration: 408.3993ms)","trace[1366229735] 'applied index is now lower than readState.Index'  (duration: 7.6µs)"],"step_count":2}
	{"level":"warn","ts":"2022-03-29T17:36:18.651Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"410.1378ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-29T17:36:18.651Z","caller":"traceutil/trace.go:171","msg":"trace[1107109752] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:856; }","duration":"410.4315ms","start":"2022-03-29T17:36:18.241Z","end":"2022-03-29T17:36:18.651Z","steps":["trace[1107109752] 'agreement among raft nodes before linearized reading'  (duration: 408.5582ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T17:36:18.651Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-29T17:36:18.241Z","time spent":"410.5093ms","remote":"127.0.0.1:58216","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2022-03-29T17:36:27.160Z","caller":"traceutil/trace.go:171","msg":"trace[2033380535] linearizableReadLoop","detail":"{readStateIndex:951; appliedIndex:951; }","duration":"293.4885ms","start":"2022-03-29T17:36:26.867Z","end":"2022-03-29T17:36:27.160Z","steps":["trace[2033380535] 'read index received'  (duration: 293.4785ms)","trace[2033380535] 'applied index is now lower than readState.Index'  (duration: 7µs)"],"step_count":2}
	{"level":"warn","ts":"2022-03-29T17:36:27.160Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"293.7104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:343"}
	{"level":"info","ts":"2022-03-29T17:36:27.160Z","caller":"traceutil/trace.go:171","msg":"trace[301432720] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:862; }","duration":"293.7798ms","start":"2022-03-29T17:36:26.867Z","end":"2022-03-29T17:36:27.160Z","steps":["trace[301432720] 'agreement among raft nodes before linearized reading'  (duration: 293.6388ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T17:36:27.160Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"210.2698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:13610"}
	{"level":"info","ts":"2022-03-29T17:36:27.160Z","caller":"traceutil/trace.go:171","msg":"trace[853826466] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:862; }","duration":"210.3132ms","start":"2022-03-29T17:36:26.950Z","end":"2022-03-29T17:36:27.160Z","steps":["trace[853826466] 'agreement among raft nodes before linearized reading'  (duration: 210.2108ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-29T17:43:31.918Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":960}
	{"level":"info","ts":"2022-03-29T17:43:31.920Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":960,"took":"1.5778ms"}
	{"level":"info","ts":"2022-03-29T17:48:31.932Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1170}
	{"level":"info","ts":"2022-03-29T17:48:31.934Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1170,"took":"901.7µs"}
	{"level":"info","ts":"2022-03-29T17:53:31.957Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1380}
	{"level":"info","ts":"2022-03-29T17:53:31.958Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1380,"took":"678.1µs"}
	{"level":"info","ts":"2022-03-29T17:54:47.065Z","caller":"traceutil/trace.go:171","msg":"trace[214672656] transaction","detail":"{read_only:false; response_revision:1642; number_of_response:1; }","duration":"103.1525ms","start":"2022-03-29T17:54:46.961Z","end":"2022-03-29T17:54:47.064Z","steps":["trace[214672656] 'process raft request'  (duration: 86.1894ms)","trace[214672656] 'compare'  (duration: 16.7563ms)"],"step_count":2}
	{"level":"info","ts":"2022-03-29T17:58:31.982Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1589}
	{"level":"info","ts":"2022-03-29T17:58:31.984Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1589,"took":"796.7µs"}
	{"level":"info","ts":"2022-03-29T18:03:31.996Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1800}
	{"level":"info","ts":"2022-03-29T18:03:31.997Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1800,"took":"543.7µs"}
	{"level":"info","ts":"2022-03-29T18:08:32.010Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2009}
	{"level":"info","ts":"2022-03-29T18:08:32.011Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2009,"took":"596.7µs"}
	
	* 
	* ==> kernel <==
	*  18:10:09 up 59 min,  0 users,  load average: 0.46, 0.33, 0.41
	Linux functional-20220329172957-1328 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [f79ac29cd4f4] <==
	* Trace[280241056]: ---"Listing from storage done" 560ms (17:34:55.809)
	Trace[280241056]: [561.1793ms] [561.1793ms] END
	I0329 17:35:43.861626       1 trace.go:205] Trace[1302803217]: "List etcd3" key:/resourcequotas/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (29-Mar-2022 17:35:43.344) (total time: 517ms):
	Trace[1302803217]: [517.2774ms] [517.2774ms] END
	I0329 17:35:43.861852       1 trace.go:205] Trace[2114647455]: "List" url:/api/v1/namespaces/default/resourcequotas,user-agent:Go-http-client/2.0,audit-id:a1e08c7c-a749-4465-acdb-7bf9c08f0599,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (29-Mar-2022 17:35:43.344) (total time: 517ms):
	Trace[2114647455]: ---"Listing from storage done" 517ms (17:35:43.861)
	Trace[2114647455]: [517.5439ms] [517.5439ms] END
	I0329 17:35:43.869789       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.102.115.54]
	I0329 17:35:43.870778       1 trace.go:205] Trace[1785157066]: "Create" url:/api/v1/namespaces/default/services,user-agent:kubectl.exe/v1.18.2 (windows/amd64) kubernetes/52c56ce,audit-id:8a0caa5d-4aea-4eab-bf96-7b57e7802ab3,client:192.168.49.1,accept:application/json,protocol:HTTP/2.0 (29-Mar-2022 17:35:43.327) (total time: 543ms):
	Trace[1785157066]: ---"Object stored in database" 542ms (17:35:43.870)
	Trace[1785157066]: [543.4243ms] [543.4243ms] END
	{"level":"warn","ts":"2022-03-29T17:36:09.238Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0022708c0/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	I0329 17:36:09.643973       1 trace.go:205] Trace[1392010626]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (29-Mar-2022 17:36:06.859) (total time: 2784ms):
	Trace[1392010626]: ---"Transaction committed" 2714ms (17:36:09.643)
	Trace[1392010626]: [2.7841077s] [2.7841077s] END
	I0329 17:36:09.645177       1 trace.go:205] Trace[1086923259]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (29-Mar-2022 17:36:06.955) (total time: 2689ms):
	Trace[1086923259]: [2.6899114s] [2.6899114s] END
	I0329 17:36:09.645942       1 trace.go:205] Trace[1333620312]: "List" url:/api/v1/namespaces/default/pods,user-agent:Go-http-client/2.0,audit-id:9e6ee599-3540-4bb7-91b5-b25aa2a4c4ad,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (29-Mar-2022 17:36:06.955) (total time: 2690ms):
	Trace[1333620312]: ---"Listing from storage done" 2690ms (17:36:09.645)
	Trace[1333620312]: [2.6907771s] [2.6907771s] END
	I0329 17:36:09.649687       1 trace.go:205] Trace[120194606]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:7839dad5-2118-4bd9-9b80-6827ffe4f657,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (29-Mar-2022 17:36:07.556) (total time: 2092ms):
	Trace[120194606]: ---"About to write a response" 2092ms (17:36:09.649)
	Trace[120194606]: [2.0929378s] [2.0929378s] END
	W0329 17:51:24.846054       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
	W0329 18:03:35.874676       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
	
	* 
	* ==> kube-controller-manager [73c61e2acede] <==
	* I0329 17:33:48.742192       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0329 17:33:48.747151       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0329 17:33:48.827953       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0329 17:33:48.836408       1 shared_informer.go:247] Caches are synced for taint 
	I0329 17:33:48.836532       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	I0329 17:33:48.836586       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0329 17:33:48.836604       1 node_lifecycle_controller.go:1012] Missing timestamp for Node functional-20220329172957-1328. Assuming now as a timestamp.
	I0329 17:33:48.836656       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0329 17:33:48.836731       1 event.go:294] "Event occurred" object="functional-20220329172957-1328" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20220329172957-1328 event: Registered Node functional-20220329172957-1328 in Controller"
	I0329 17:33:48.839814       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0329 17:33:48.851619       1 shared_informer.go:247] Caches are synced for attach detach 
	I0329 17:33:48.927577       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 17:33:48.927663       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 17:33:48.927697       1 shared_informer.go:247] Caches are synced for disruption 
	I0329 17:33:48.927729       1 disruption.go:371] Sending events to api server.
	I0329 17:33:49.341790       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 17:33:49.349471       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 17:33:49.349606       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0329 17:34:19.513146       1 event.go:294] "Event occurred" object="default/hello-node-connect" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-74cf8bc446 to 1"
	I0329 17:34:19.554100       1 event.go:294] "Event occurred" object="default/hello-node-connect-74cf8bc446" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-74cf8bc446-9gn2c"
	I0329 17:34:24.545209       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0329 17:34:32.949600       1 event.go:294] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54fbb85 to 1"
	I0329 17:34:33.042156       1 event.go:294] "Event occurred" object="default/hello-node-54fbb85" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54fbb85-fbwng"
	I0329 17:35:43.950398       1 event.go:294] "Event occurred" object="default/mysql" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-b87c45988 to 1"
	I0329 17:35:44.063683       1 event.go:294] "Event occurred" object="default/mysql-b87c45988" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-b87c45988-p2gxr"
	
	* 
	* ==> kube-controller-manager [b9f767826891] <==
	* I0329 17:31:37.837248       1 shared_informer.go:247] Caches are synced for HPA 
	I0329 17:31:37.927878       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-functional-20220329172957-1328" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0329 17:31:37.928072       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 17:31:37.928108       1 shared_informer.go:247] Caches are synced for stateful set 
	I0329 17:31:37.928139       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0329 17:31:37.928360       1 shared_informer.go:247] Caches are synced for disruption 
	I0329 17:31:37.928375       1 disruption.go:371] Sending events to api server.
	I0329 17:31:37.928640       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0329 17:31:38.028151       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 17:31:38.127869       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0329 17:31:38.131067       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-functional-20220329172957-1328" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0329 17:31:38.131377       1 event.go:294] "Event occurred" object="kube-system/etcd-functional-20220329172957-1328" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0329 17:31:38.131521       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-functional-20220329172957-1328" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0329 17:31:38.234114       1 range_allocator.go:374] Set node functional-20220329172957-1328 PodCIDR to [10.244.0.0/24]
	I0329 17:31:38.333819       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0329 17:31:38.429307       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 17:31:38.430901       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0329 17:31:38.430950       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 17:31:38.656120       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-lsxfx"
	I0329 17:31:38.656161       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lbxsz"
	I0329 17:31:38.730466       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-hkk6g"
	I0329 17:31:39.157068       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0329 17:31:39.264331       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-lsxfx"
	I0329 17:31:43.153647       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0329 17:31:43.157408       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d-hkk6g" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-64897985d-hkk6g"
	
	* 
	* ==> kube-proxy [3283aac9e9d4] <==
	* E0329 17:31:41.533540       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0329 17:31:41.540768       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0329 17:31:41.543818       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0329 17:31:41.547533       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0329 17:31:41.552338       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0329 17:31:41.626745       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0329 17:31:41.729779       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0329 17:31:41.730264       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0329 17:31:41.730509       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0329 17:31:42.029722       1 server_others.go:206] "Using iptables Proxier"
	I0329 17:31:42.029851       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0329 17:31:42.029872       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0329 17:31:42.029906       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0329 17:31:42.030925       1 server.go:656] "Version info" version="v1.23.5"
	I0329 17:31:42.032085       1 config.go:317] "Starting service config controller"
	I0329 17:31:42.032227       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0329 17:31:42.032432       1 config.go:226] "Starting endpoint slice config controller"
	I0329 17:31:42.032459       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0329 17:31:42.132633       1 shared_informer.go:247] Caches are synced for service config 
	I0329 17:31:42.132787       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-proxy [dd1502fa70ee] <==
	* E0329 17:33:20.132463       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0329 17:33:20.137453       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0329 17:33:20.227156       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0329 17:33:20.231214       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0329 17:33:20.234666       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0329 17:33:20.238521       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E0329 17:33:20.329895       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329172957-1328": dial tcp 192.168.49.2:8441: connect: connection refused
	E0329 17:33:28.427555       1 node.go:152] Failed to retrieve node info: nodes "functional-20220329172957-1328" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
	E0329 17:33:30.547761       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329172957-1328": dial tcp 192.168.49.2:8441: connect: connection refused
	E0329 17:33:35.339211       1 node.go:152] Failed to retrieve node info: nodes "functional-20220329172957-1328" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:node-proxier" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	I0329 17:33:44.466372       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0329 17:33:44.466525       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0329 17:33:44.466655       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0329 17:33:44.539005       1 server_others.go:206] "Using iptables Proxier"
	I0329 17:33:44.539076       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0329 17:33:44.539089       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0329 17:33:44.539109       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0329 17:33:44.539833       1 server.go:656] "Version info" version="v1.23.5"
	I0329 17:33:44.540586       1 config.go:317] "Starting service config controller"
	I0329 17:33:44.540725       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0329 17:33:44.540974       1 config.go:226] "Starting endpoint slice config controller"
	I0329 17:33:44.540988       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0329 17:33:44.642063       1 shared_informer.go:247] Caches are synced for service config 
	I0329 17:33:44.642139       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [58b93fed034c] <==
	* E0329 17:31:21.862282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0329 17:31:21.864569       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0329 17:31:21.864667       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0329 17:31:21.878745       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0329 17:31:21.878857       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0329 17:31:22.076839       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0329 17:31:22.076957       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0329 17:31:22.093489       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0329 17:31:22.093605       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0329 17:31:22.109878       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0329 17:31:22.110001       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0329 17:31:22.166652       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0329 17:31:22.166773       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0329 17:31:22.174085       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0329 17:31:22.174205       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0329 17:31:22.229187       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0329 17:31:22.229312       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0329 17:31:22.229326       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0329 17:31:22.229366       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0329 17:31:22.250890       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0329 17:31:22.251003       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0329 17:31:24.530869       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0329 17:33:14.928351       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0329 17:33:14.928824       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0329 17:33:14.929185       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kube-scheduler [73b06757d8b6] <==
	* W0329 17:33:28.427510       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0329 17:33:28.427668       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0329 17:33:28.427687       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0329 17:33:28.427700       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0329 17:33:28.532775       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.5"
	I0329 17:33:28.535071       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0329 17:33:28.536123       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0329 17:33:28.536378       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0329 17:33:28.536404       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0329 17:33:28.636960       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0329 17:33:35.139203       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E0329 17:33:35.139403       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
	E0329 17:33:35.139446       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E0329 17:33:35.139475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
	E0329 17:33:35.139597       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E0329 17:33:35.139742       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0329 17:33:35.139877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0329 17:33:35.327054       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	E0329 17:33:35.331041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
	E0329 17:33:35.331197       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0329 17:33:35.331364       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E0329 17:33:35.331485       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E0329 17:33:35.331666       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0329 17:33:35.331676       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E0329 17:33:35.331735       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-03-29 17:30:46 UTC, end at Tue 2022-03-29 18:10:10 UTC. --
	Mar 29 17:35:04 functional-20220329172957-1328 kubelet[5921]: E0329 17:35:04.288591    5921 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: f6f8caac6d68b92fe20428e8d9dfee313a711227c38bd261908f0fa7746fab7f" containerID="f6f8caac6d68b92fe20428e8d9dfee313a711227c38bd261908f0fa7746fab7f"
	Mar 29 17:35:04 functional-20220329172957-1328 kubelet[5921]: I0329 17:35:04.288915    5921 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:f6f8caac6d68b92fe20428e8d9dfee313a711227c38bd261908f0fa7746fab7f} err="failed to get container status \"f6f8caac6d68b92fe20428e8d9dfee313a711227c38bd261908f0fa7746fab7f\": rpc error: code = Unknown desc = Error: No such container: f6f8caac6d68b92fe20428e8d9dfee313a711227c38bd261908f0fa7746fab7f"
	Mar 29 17:35:04 functional-20220329172957-1328 kubelet[5921]: I0329 17:35:04.820631    5921 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 17:35:04 functional-20220329172957-1328 kubelet[5921]: I0329 17:35:04.924203    5921 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8651ceaf-5db5-4c3f-b370-19daa78d363e\" (UniqueName: \"kubernetes.io/host-path/22556e39-7908-44a3-bab6-4201cde99f79-pvc-8651ceaf-5db5-4c3f-b370-19daa78d363e\") pod \"sp-pod\" (UID: \"22556e39-7908-44a3-bab6-4201cde99f79\") " pod="default/sp-pod"
	Mar 29 17:35:04 functional-20220329172957-1328 kubelet[5921]: I0329 17:35:04.924386    5921 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7x8w\" (UniqueName: \"kubernetes.io/projected/22556e39-7908-44a3-bab6-4201cde99f79-kube-api-access-h7x8w\") pod \"sp-pod\" (UID: \"22556e39-7908-44a3-bab6-4201cde99f79\") " pod="default/sp-pod"
	Mar 29 17:35:05 functional-20220329172957-1328 kubelet[5921]: I0329 17:35:05.675654    5921 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=3d39a4d8-25ef-40c8-aeee-26299d40c667 path="/var/lib/kubelet/pods/3d39a4d8-25ef-40c8-aeee-26299d40c667/volumes"
	Mar 29 17:35:06 functional-20220329172957-1328 kubelet[5921]: I0329 17:35:06.124466    5921 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
	Mar 29 17:35:06 functional-20220329172957-1328 kubelet[5921]: I0329 17:35:06.134605    5921 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
	Mar 29 17:35:06 functional-20220329172957-1328 kubelet[5921]: I0329 17:35:06.139799    5921 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="094554dc798e6667c83eaaad1a106c76ee34be548b7ed8e861637f722b1d8f1a"
	Mar 29 17:35:07 functional-20220329172957-1328 kubelet[5921]: I0329 17:35:07.165137    5921 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
	Mar 29 17:35:08 functional-20220329172957-1328 kubelet[5921]: I0329 17:35:08.213124    5921 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
	Mar 29 17:35:44 functional-20220329172957-1328 kubelet[5921]: I0329 17:35:44.130486    5921 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 17:35:44 functional-20220329172957-1328 kubelet[5921]: I0329 17:35:44.332252    5921 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj2bh\" (UniqueName: \"kubernetes.io/projected/4866b3db-09ff-453d-85ba-cd7df98a719d-kube-api-access-qj2bh\") pod \"mysql-b87c45988-p2gxr\" (UID: \"4866b3db-09ff-453d-85ba-cd7df98a719d\") " pod="default/mysql-b87c45988-p2gxr"
	Mar 29 17:35:45 functional-20220329172957-1328 kubelet[5921]: I0329 17:35:45.429212    5921 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="de3ff2eeaba6704b6af59429353c1b6b4cac5b8d40809e57a984476585fb5537"
	Mar 29 17:35:45 functional-20220329172957-1328 kubelet[5921]: I0329 17:35:45.429311    5921 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-p2gxr through plugin: invalid network status for"
	Mar 29 17:35:46 functional-20220329172957-1328 kubelet[5921]: I0329 17:35:46.447718    5921 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-p2gxr through plugin: invalid network status for"
	Mar 29 17:36:27 functional-20220329172957-1328 kubelet[5921]: I0329 17:36:27.849647    5921 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-p2gxr through plugin: invalid network status for"
	Mar 29 17:36:28 functional-20220329172957-1328 kubelet[5921]: I0329 17:36:28.937651    5921 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-p2gxr through plugin: invalid network status for"
	Mar 29 17:38:28 functional-20220329172957-1328 kubelet[5921]: W0329 17:38:28.948892    5921 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Mar 29 17:43:28 functional-20220329172957-1328 kubelet[5921]: W0329 17:43:28.949114    5921 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Mar 29 17:48:28 functional-20220329172957-1328 kubelet[5921]: W0329 17:48:28.949344    5921 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Mar 29 17:53:28 functional-20220329172957-1328 kubelet[5921]: W0329 17:53:28.952002    5921 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Mar 29 17:58:28 functional-20220329172957-1328 kubelet[5921]: W0329 17:58:28.948050    5921 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Mar 29 18:03:28 functional-20220329172957-1328 kubelet[5921]: W0329 18:03:28.947435    5921 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Mar 29 18:08:28 functional-20220329172957-1328 kubelet[5921]: W0329 18:08:28.947664    5921 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	
	* 
	* ==> storage-provisioner [010f8c02c981] <==
	* I0329 17:33:38.050610       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0329 17:33:38.142541       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0329 17:33:38.142602       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0329 17:33:55.740110       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0329 17:33:55.740611       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220329172957-1328_b5ef7fae-f5c0-4037-a559-565ba8d0d423!
	I0329 17:33:55.740585       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"80de22e3-9758-4591-a399-e5b3fb682437", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220329172957-1328_b5ef7fae-f5c0-4037-a559-565ba8d0d423 became leader
	I0329 17:33:55.841707       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220329172957-1328_b5ef7fae-f5c0-4037-a559-565ba8d0d423!
	I0329 17:34:24.544974       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0329 17:34:24.545396       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    12368d55-eaa3-4583-8ca4-adde53b1c69d 478 0 2022-03-29 17:31:44 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-03-29 17:31:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-8651ceaf-5db5-4c3f-b370-19daa78d363e &PersistentVolumeClaim{ObjectMeta:{myclaim  default  8651ceaf-5db5-4c3f-b370-19daa78d363e 682 0 2022-03-29 17:34:24 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2022-03-29 17:34:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-03-29 17:34:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0329 17:34:24.545986       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"8651ceaf-5db5-4c3f-b370-19daa78d363e", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0329 17:34:24.546445       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-8651ceaf-5db5-4c3f-b370-19daa78d363e" provisioned
	I0329 17:34:24.548371       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0329 17:34:24.550058       1 volume_store.go:212] Trying to save persistentvolume "pvc-8651ceaf-5db5-4c3f-b370-19daa78d363e"
	I0329 17:34:24.569742       1 volume_store.go:219] persistentvolume "pvc-8651ceaf-5db5-4c3f-b370-19daa78d363e" saved
	I0329 17:34:24.570112       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"8651ceaf-5db5-4c3f-b370-19daa78d363e", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-8651ceaf-5db5-4c3f-b370-19daa78d363e
	
	* 
	* ==> storage-provisioner [5f47160e5e67] <==
	* I0329 17:33:19.133849       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0329 17:33:19.141758       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220329172957-1328 -n functional-20220329172957-1328
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220329172957-1328 -n functional-20220329172957-1328: (4.1064381s)
helpers_test.go:262: (dbg) Run:  kubectl --context functional-20220329172957-1328 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context functional-20220329172957-1328 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context functional-20220329172957-1328 describe pod : exit status 1 (234.9202ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context functional-20220329172957-1328 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (2142.84s)

                                                
                                    
TestSkaffold (136.66s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  C:\Users\jenkins.minikube8\AppData\Local\Temp\skaffold.exe18927593 version
skaffold_test.go:61: skaffold version: v1.37.0
skaffold_test.go:64: (dbg) Run:  out/minikube-windows-amd64.exe start -p skaffold-20220329185334-1328 --memory=2600 --driver=docker
E0329 18:54:13.481944    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 18:54:16.206258    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
skaffold_test.go:64: (dbg) Done: out/minikube-windows-amd64.exe start -p skaffold-20220329185334-1328 --memory=2600 --driver=docker: (1m36.4879825s)
skaffold_test.go:84: copying out/minikube-windows-amd64.exe to C:\jenkins\workspace\Docker_Windows_integration\out\minikube.exe
skaffold_test.go:108: (dbg) Run:  C:\Users\jenkins.minikube8\AppData\Local\Temp\skaffold.exe18927593 run --minikube-profile skaffold-20220329185334-1328 --kube-context skaffold-20220329185334-1328 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:108: (dbg) Non-zero exit: C:\Users\jenkins.minikube8\AppData\Local\Temp\skaffold.exe18927593 run --minikube-profile skaffold-20220329185334-1328 --kube-context skaffold-20220329185334-1328 --status-check=true --port-forward=false --interactive=false: exit status 1 (12.1929642s)

                                                
                                                
-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Not found. Building
	 - leeroy-app: Not found. Building
	Starting build...
	Found [skaffold-20220329185334-1328] context, using local docker daemon.
	Building [leeroy-app]...
	Target platforms: [linux/amd64]
	#1 [internal] load build definition from Dockerfile
	#1 sha256:90e8c0ce4c57f1d49adce4845533ad5647a57ef5cec198683698f9f56702fbc1
	#1 transferring dockerfile: 345B 0.0s done
	#1 DONE 0.5s
	
	#2 [internal] load .dockerignore
	#2 sha256:6acd6502e1f7d9d762410d2141f3e8b5aefc884591fb95a6c0249fbbda2a2939
	#2 transferring context: 2B 0.0s done
	#2 DONE 0.3s
	
	#3 [internal] load metadata for docker.io/library/alpine:3.10
	#3 sha256:ac8c9d4b8fc421ddf809bac2b79af6ebec0aa591815b5d2abf229ccdfba18d01
	#3 ERROR: rpc error: code = Unknown desc = error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	
	#4 [internal] load metadata for docker.io/library/golang:1.12.9-alpine3.10
	#4 sha256:3e6280708dea593be8ec70e0050e1a81cce57ccd8855e8cbe6de9abfeed8cee7
	#4 ERROR: rpc error: code = Unknown desc = error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	------
	 > [internal] load metadata for docker.io/library/alpine:3.10:
	------
	------
	 > [internal] load metadata for docker.io/library/golang:1.12.9-alpine3.10:
	------
	failed to solve with frontend dockerfile.v0: failed to create LLB definition: rpc error: code = Unknown desc = error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	Building [leeroy-web]...
	Target platforms: [linux/amd64]
	Build [leeroy-web] was canceled

                                                
                                                
-- /stdout --
** stderr ** 
	build [leeroy-app] failed: exit status 1. Docker build ran into internal error. Please retry.
	If this keeps happening, please open an issue..

                                                
                                                
** /stderr **
skaffold_test.go:110: error running skaffold: exit status 1

panic.go:642: *** TestSkaffold FAILED at 2022-03-29 18:55:24.5166129 +0000 GMT m=+6062.117517501
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect skaffold-20220329185334-1328
helpers_test.go:236: (dbg) docker inspect skaffold-20220329185334-1328:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4c38e4668d83858288cff5a8210e8a6c85022c1185d79f34361913ec68ff1b86",
	        "Created": "2022-03-29T18:54:25.1084666Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 121480,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-29T18:54:26.6257948Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/4c38e4668d83858288cff5a8210e8a6c85022c1185d79f34361913ec68ff1b86/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4c38e4668d83858288cff5a8210e8a6c85022c1185d79f34361913ec68ff1b86/hostname",
	        "HostsPath": "/var/lib/docker/containers/4c38e4668d83858288cff5a8210e8a6c85022c1185d79f34361913ec68ff1b86/hosts",
	        "LogPath": "/var/lib/docker/containers/4c38e4668d83858288cff5a8210e8a6c85022c1185d79f34361913ec68ff1b86/4c38e4668d83858288cff5a8210e8a6c85022c1185d79f34361913ec68ff1b86-json.log",
	        "Name": "/skaffold-20220329185334-1328",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "skaffold-20220329185334-1328:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "skaffold-20220329185334-1328",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2726297600,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2726297600,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3acd167aa90ad6de1a1189f7ab0b4a7936d99f9f2cbe83541cf96c7b04a043bc-init/diff:/var/lib/docker/overlay2/4eae5e38ad3553f9f0fde74ad732117b98cb0e1af550ecd7ce386997eede943f/diff:/var/lib/docker/overlay2/6789b74c71a0164bd481c99dc53318989abbcdc33b160f5d04f44aee12c80671/diff:/var/lib/docker/overlay2/91c6ac2f9a1035ebae76daccc83a3cafe5d26b2bd6b60ad54a6e29588a7003f8/diff:/var/lib/docker/overlay2/a916d7329da723d8397bfda8e20f2beb9156ceece20236242a811e43984bbfeb/diff:/var/lib/docker/overlay2/b046f566fd53b4f2f6d2c347c752b47f6c1a64316baeaa8c0fda825346ef7aba/diff:/var/lib/docker/overlay2/13a76ad56283b88db0508d09cc281c66801cee04cdbdd8f00827788d5231a025/diff:/var/lib/docker/overlay2/8e95b9ffc444e9f6b52db61f07f0a93bb3feb51b5d9dab6b7df487fef8d277f6/diff:/var/lib/docker/overlay2/bf807f6bedece6f8033221974e6b2ffdf94a6f9320d4f09337ed51b411f8f999/diff:/var/lib/docker/overlay2/d8184ca2707eba09a4f6bd90cad4795ce0f226f863f2d84723287ad76f1158d8/diff:/var/lib/docker/overlay2/3906858e1746cab95814956b950325758e0765c0a6597b3d9062a4c36ab409be/diff:/var/lib/docker/overlay2/128db97cb7dee3d09e506aaaf97a45b5a647d8eb90782f5dd444aec15ff525da/diff:/var/lib/docker/overlay2/713bbf0f0ba84035f3a06b59c058ccfe9e7639f2ecb9d3db244e1adec7b6c46b/diff:/var/lib/docker/overlay2/6a820465cd423660c71cbb6741a47e4619efcf0010ac49bd49146501b9ac4925/diff:/var/lib/docker/overlay2/20c66385f330043e2c50b8193a59172de08776bbabdca289cb51c1b5f17e9b98/diff:/var/lib/docker/overlay2/7b2439fa81d8ff403bd5767752380391449aeba92453e1846fd36cfce9e6de61/diff:/var/lib/docker/overlay2/ee227ab74915b1419cfbc67f2b14b08cf564b4a38a39b157de2c65250a9172bf/diff:/var/lib/docker/overlay2/0b92e2531a28b01133cc2ab65802b03c04ef0213e850ac8558c9c4071fd018dd/diff:/var/lib/docker/overlay2/3de4968e9a773e45d79b096d23038e48758528adce69f14e7ff3a93bbd3192d7/diff:/var/lib/docker/overlay2/92eb87a3831ecebb34eb1e0ea7a71af9883f8426f35387845769f5fe75f04a52/diff:/var/lib/docker/overlay2/82a4c6fc3869bde23593a8490af76e406ad5a27ef1c30a38b481944390f7466e/diff:/var/lib/docker/overlay2/6c957b5c04708287c2261d895a0f4563f25cc766eb21913c4ceb36f27a04914e/diff:/var/lib/docker/overlay2/21df3fb223398ef06fb62c4617e3487f0ac955e4f38ee3d2d72c9da488d436c7/diff:/var/lib/docker/overlay2/ddaf18203a4027208ea592b9716939849af0aa5d2cac57d2b0c36382e078f483/diff:/var/lib/docker/overlay2/9a82b4c496462c1bf59ccb096f886e61674d92540023b7fed618682584358cbf/diff:/var/lib/docker/overlay2/62a8d9c5758a93af517541ab9d841f9415f55ca5503844371b7e35d47838dbb0/diff:/var/lib/docker/overlay2/c17d3885b54e341402c392175e2ab4ff1ab038acafe82a8090b1725613597f95/diff:/var/lib/docker/overlay2/d1401e4d6e04dded3c7d0335e32d0eb6cf2d7c19d21da53b836d591dddac8961/diff:/var/lib/docker/overlay2/7c4934c7f4f9cce1a35b340eebbc473f9bb33153f61f1c0454bffd0b2ae5a37e/diff:/var/lib/docker/overlay2/02d6bd07f6dbb7198d2c42fe26ff2efbabb9a889dfa0b79fd05e06a021bc81b4/diff:/var/lib/docker/overlay2/137f83b86485992317df9126e714cd331df51131ac4990d1040cf54cace6506e/diff:/var/lib/docker/overlay2/75d1117a1f5f001df3981193d1251ab8426eb4c100c9c1bbb946f0c2e0e1d73c/diff:/var/lib/docker/overlay2/b20542be533b230be3dee06af0364759a81f26397d9371a7052efdac48fc1a3e/diff:/var/lib/docker/overlay2/b6103a89043f339bfc18a195b11f4a57f6042806725aac9d6b8db0e2af4fe01e/diff:/var/lib/docker/overlay2/69041f5eef389b325dd43fa81731c884299e2cb880a57ba904b8752c12446236/diff:/var/lib/docker/overlay2/8bc9de0232e5ba86f129e746c52a7f53836827a1a9cfc8e0c731d81af17b92a4/diff:/var/lib/docker/overlay2/5494bafa4607149ff46b2ed95fd9c86139339508d3c27bf32346963a41ae95f1/diff:/var/lib/docker/overlay2/daaadc749b2e3fb99bb23ec4d0a908e70deef3f9caff12f7b3fa29a57086e13a/diff:/var/lib/docker/overlay2/35b939c7fd0daf3717995c2aff595f96a741b48ae2da6b523aeda782ea3922e9/diff:/var/lib/docker/overlay2/b5a01cc1c410e803d28949ef6f35b55ac04473d89beb188d9d4866287b7cbbee/diff:/var/lib/docker/overlay2/c26c0af38634a15c6619c42bd2e5ec804bab550ff8078c084ba220030d8f4b93/diff:/var/lib/docker/overlay2/c12adb9eba87b6903ac0b2e16234b6a4f11a66d10d30d5379b19963433b76506/diff:/var/lib/docker/overlay2/537ea8129185a2faaaafa08ee553e15fe2cee04e80dab99066f779573324b53c/diff:/var/lib/docker/overlay2/ba74848f80f8d422a61241b3778f2395a32e73958e6a6dfddf5724bd0367dc67/diff:/var/lib/docker/overlay2/be8013e1c023e08543e181408137e02941d2b05181428b80bf154108c0cf48a5/diff:/var/lib/docker/overlay2/895568f040b89c0f90e7f4e41a1a77ca025acd0a0e0682a242f830a2e9c4ede7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3acd167aa90ad6de1a1189f7ab0b4a7936d99f9f2cbe83541cf96c7b04a043bc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3acd167aa90ad6de1a1189f7ab0b4a7936d99f9f2cbe83541cf96c7b04a043bc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3acd167aa90ad6de1a1189f7ab0b4a7936d99f9f2cbe83541cf96c7b04a043bc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "skaffold-20220329185334-1328",
	                "Source": "/var/lib/docker/volumes/skaffold-20220329185334-1328/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "skaffold-20220329185334-1328",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "skaffold-20220329185334-1328",
	                "name.minikube.sigs.k8s.io": "skaffold-20220329185334-1328",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c5c1ab2f6f70fe6a521bf17898a4cd00fa0f7bf1d75023e36f8eb7ad9f642336",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56237"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56238"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56239"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56240"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56236"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c5c1ab2f6f70",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "skaffold-20220329185334-1328": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4c38e4668d83",
	                        "skaffold-20220329185334-1328"
	                    ],
	                    "NetworkID": "c83454fcec20e8238c17c35d389b6bfc2e920952b118955724127b60adf8d370",
	                    "EndpointID": "6f09cb393da4d9d57826a96c3c713dec48e91f68b3504ecaac999190dcdfe658",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
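The `NetworkSettings.Ports` block in the `docker inspect` dump above is how the published host ports are discovered: each container port (22/tcp for SSH, 8443/tcp for the API server, and so on) is bound to a `127.0.0.1` host port. A minimal illustrative sketch of extracting those mappings from such JSON (not part of the test suite; the `host_ports` helper is hypothetical, and the sample data reproduces a fragment of the dump above):

```python
import json

# Sample fragment shaped like the `docker inspect` output above.
inspect_output = json.loads("""
[
  {
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "56237"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "56236"}]
      }
    }
  }
]
""")

def host_ports(inspect_json):
    """Map container port (e.g. '22/tcp') -> published 'HostIp:HostPort'."""
    ports = inspect_json[0]["NetworkSettings"]["Ports"]
    return {
        container_port: f"{b['HostIp']}:{b['HostPort']}"
        for container_port, bindings in ports.items()
        if bindings  # unpublished ports have a null binding list
        for b in bindings
    }

print(host_ports(inspect_output))
# {'22/tcp': '127.0.0.1:56237', '8443/tcp': '127.0.0.1:56236'}
```

The same information can be pulled directly with `docker inspect --format '{{json .NetworkSettings.Ports}}' <container>`.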
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p skaffold-20220329185334-1328 -n skaffold-20220329185334-1328
helpers_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p skaffold-20220329185334-1328 -n skaffold-20220329185334-1328: (4.1646186s)
helpers_test.go:245: <<< TestSkaffold FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestSkaffold]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p skaffold-20220329185334-1328 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p skaffold-20220329185334-1328 logs -n 25: (5.2519613s)
helpers_test.go:253: TestSkaffold logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|--------------------------------------------------------------------------------------------------------------------------------|------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	|  Command   |                                                              Args                                                              |              Profile               |       User        | Version |          Start Time           |           End Time            |
	|------------|--------------------------------------------------------------------------------------------------------------------------------|------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	| -p         | multinode-20220329182619-1328 cp multinode-20220329182619-1328-m03:/home/docker/cp-test.txt                                    | multinode-20220329182619-1328      | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:34:24 GMT | Tue, 29 Mar 2022 18:34:29 GMT |
	|            | multinode-20220329182619-1328-m02:/home/docker/cp-test_multinode-20220329182619-1328-m03_multinode-20220329182619-1328-m02.txt |                                    |                   |         |                               |                               |
	| -p         | multinode-20220329182619-1328                                                                                                  | multinode-20220329182619-1328      | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:34:29 GMT | Tue, 29 Mar 2022 18:34:33 GMT |
	|            | ssh -n                                                                                                                         |                                    |                   |         |                               |                               |
	|            | multinode-20220329182619-1328-m03                                                                                              |                                    |                   |         |                               |                               |
	|            | sudo cat /home/docker/cp-test.txt                                                                                              |                                    |                   |         |                               |                               |
	| -p         | multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m02 sudo cat                                                | multinode-20220329182619-1328      | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:34:33 GMT | Tue, 29 Mar 2022 18:34:37 GMT |
	|            | /home/docker/cp-test_multinode-20220329182619-1328-m03_multinode-20220329182619-1328-m02.txt                                   |                                    |                   |         |                               |                               |
	| -p         | multinode-20220329182619-1328                                                                                                  | multinode-20220329182619-1328      | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:34:37 GMT | Tue, 29 Mar 2022 18:34:42 GMT |
	|            | node stop m03                                                                                                                  |                                    |                   |         |                               |                               |
	| -p         | multinode-20220329182619-1328                                                                                                  | multinode-20220329182619-1328      | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:34:56 GMT | Tue, 29 Mar 2022 18:35:30 GMT |
	|            | node start m03                                                                                                                 |                                    |                   |         |                               |                               |
	|            | --alsologtostderr                                                                                                              |                                    |                   |         |                               |                               |
	| stop       | -p                                                                                                                             | multinode-20220329182619-1328      | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:35:38 GMT | Tue, 29 Mar 2022 18:36:10 GMT |
	|            | multinode-20220329182619-1328                                                                                                  |                                    |                   |         |                               |                               |
	| start      | -p                                                                                                                             | multinode-20220329182619-1328      | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:36:11 GMT | Tue, 29 Mar 2022 18:39:08 GMT |
	|            | multinode-20220329182619-1328                                                                                                  |                                    |                   |         |                               |                               |
	|            | --wait=true -v=8                                                                                                               |                                    |                   |         |                               |                               |
	|            | --alsologtostderr                                                                                                              |                                    |                   |         |                               |                               |
	| -p         | multinode-20220329182619-1328                                                                                                  | multinode-20220329182619-1328      | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:39:08 GMT | Tue, 29 Mar 2022 18:39:27 GMT |
	|            | node delete m03                                                                                                                |                                    |                   |         |                               |                               |
	| -p         | multinode-20220329182619-1328                                                                                                  | multinode-20220329182619-1328      | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:39:34 GMT | Tue, 29 Mar 2022 18:40:03 GMT |
	|            | stop                                                                                                                           |                                    |                   |         |                               |                               |
	| start      | -p                                                                                                                             | multinode-20220329182619-1328      | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:40:09 GMT | Tue, 29 Mar 2022 18:42:36 GMT |
	|            | multinode-20220329182619-1328                                                                                                  |                                    |                   |         |                               |                               |
	|            | --wait=true -v=8                                                                                                               |                                    |                   |         |                               |                               |
	|            | --alsologtostderr                                                                                                              |                                    |                   |         |                               |                               |
	|            | --driver=docker                                                                                                                |                                    |                   |         |                               |                               |
	| start      | -p                                                                                                                             | multinode-20220329182619-1328-m03  | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:42:44 GMT | Tue, 29 Mar 2022 18:44:26 GMT |
	|            | multinode-20220329182619-1328-m03                                                                                              |                                    |                   |         |                               |                               |
	|            | --driver=docker                                                                                                                |                                    |                   |         |                               |                               |
	| delete     | -p                                                                                                                             | multinode-20220329182619-1328-m03  | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:44:30 GMT | Tue, 29 Mar 2022 18:44:47 GMT |
	|            | multinode-20220329182619-1328-m03                                                                                              |                                    |                   |         |                               |                               |
	| delete     | -p                                                                                                                             | multinode-20220329182619-1328      | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:44:47 GMT | Tue, 29 Mar 2022 18:45:12 GMT |
	|            | multinode-20220329182619-1328                                                                                                  |                                    |                   |         |                               |                               |
	| start      | -p                                                                                                                             | test-preload-20220329184512-1328   | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:45:12 GMT | Tue, 29 Mar 2022 18:47:49 GMT |
	|            | test-preload-20220329184512-1328                                                                                               |                                    |                   |         |                               |                               |
	|            | --memory=2200 --alsologtostderr                                                                                                |                                    |                   |         |                               |                               |
	|            | --wait=true --preload=false                                                                                                    |                                    |                   |         |                               |                               |
	|            | --driver=docker                                                                                                                |                                    |                   |         |                               |                               |
	|            | --kubernetes-version=v1.17.0                                                                                                   |                                    |                   |         |                               |                               |
	| ssh        | -p                                                                                                                             | test-preload-20220329184512-1328   | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:47:49 GMT | Tue, 29 Mar 2022 18:47:54 GMT |
	|            | test-preload-20220329184512-1328                                                                                               |                                    |                   |         |                               |                               |
	|            | -- docker pull                                                                                                                 |                                    |                   |         |                               |                               |
	|            | gcr.io/k8s-minikube/busybox                                                                                                    |                                    |                   |         |                               |                               |
	| start      | -p                                                                                                                             | test-preload-20220329184512-1328   | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:47:55 GMT | Tue, 29 Mar 2022 18:50:03 GMT |
	|            | test-preload-20220329184512-1328                                                                                               |                                    |                   |         |                               |                               |
	|            | --memory=2200 --alsologtostderr                                                                                                |                                    |                   |         |                               |                               |
	|            | -v=1 --wait=true --driver=docker                                                                                               |                                    |                   |         |                               |                               |
	|            | --kubernetes-version=v1.17.3                                                                                                   |                                    |                   |         |                               |                               |
	| ssh        | -p                                                                                                                             | test-preload-20220329184512-1328   | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:50:04 GMT | Tue, 29 Mar 2022 18:50:07 GMT |
	|            | test-preload-20220329184512-1328                                                                                               |                                    |                   |         |                               |                               |
	|            | -- docker images                                                                                                               |                                    |                   |         |                               |                               |
	| delete     | -p                                                                                                                             | test-preload-20220329184512-1328   | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:50:08 GMT | Tue, 29 Mar 2022 18:50:19 GMT |
	|            | test-preload-20220329184512-1328                                                                                               |                                    |                   |         |                               |                               |
	| start      | -p                                                                                                                             | scheduled-stop-20220329185019-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:50:19 GMT | Tue, 29 Mar 2022 18:52:01 GMT |
	|            | scheduled-stop-20220329185019-1328                                                                                             |                                    |                   |         |                               |                               |
	|            | --memory=2048 --driver=docker                                                                                                  |                                    |                   |         |                               |                               |
	| stop       | -p                                                                                                                             | scheduled-stop-20220329185019-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:52:01 GMT | Tue, 29 Mar 2022 18:52:05 GMT |
	|            | scheduled-stop-20220329185019-1328                                                                                             |                                    |                   |         |                               |                               |
	|            | --schedule 5m                                                                                                                  |                                    |                   |         |                               |                               |
	| ssh        | -p                                                                                                                             | scheduled-stop-20220329185019-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:52:10 GMT | Tue, 29 Mar 2022 18:52:14 GMT |
	|            | scheduled-stop-20220329185019-1328                                                                                             |                                    |                   |         |                               |                               |
	|            | -- sudo systemctl show                                                                                                         |                                    |                   |         |                               |                               |
	|            | minikube-scheduled-stop --no-page                                                                                              |                                    |                   |         |                               |                               |
	| stop       | -p                                                                                                                             | scheduled-stop-20220329185019-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:52:14 GMT | Tue, 29 Mar 2022 18:52:17 GMT |
	|            | scheduled-stop-20220329185019-1328                                                                                             |                                    |                   |         |                               |                               |
	|            | --schedule 5s                                                                                                                  |                                    |                   |         |                               |                               |
	| delete     | -p                                                                                                                             | scheduled-stop-20220329185019-1328 | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:53:22 GMT | Tue, 29 Mar 2022 18:53:34 GMT |
	|            | scheduled-stop-20220329185019-1328                                                                                             |                                    |                   |         |                               |                               |
	| start      | -p                                                                                                                             | skaffold-20220329185334-1328       | minikube8\jenkins | v1.25.2 | Tue, 29 Mar 2022 18:53:35 GMT | Tue, 29 Mar 2022 18:55:12 GMT |
	|            | skaffold-20220329185334-1328                                                                                                   |                                    |                   |         |                               |                               |
	|            | --memory=2600 --driver=docker                                                                                                  |                                    |                   |         |                               |                               |
	| docker-env | --shell none -p                                                                                                                | skaffold-20220329185334-1328       | skaffold          | v1.25.2 | Tue, 29 Mar 2022 18:55:14 GMT | Tue, 29 Mar 2022 18:55:19 GMT |
	|            | skaffold-20220329185334-1328                                                                                                   |                                    |                   |         |                               |                               |
	|            | --user=skaffold                                                                                                                |                                    |                   |         |                               |                               |
	|------------|--------------------------------------------------------------------------------------------------------------------------------|------------------------------------|-------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 18:53:35
	Running on machine: minikube8
	Binary: Built with gc go1.17.7 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 18:53:35.898652    8960 out.go:297] Setting OutFile to fd 1420 ...
	I0329 18:53:35.962517    8960 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 18:53:35.962517    8960 out.go:310] Setting ErrFile to fd 1424...
	I0329 18:53:35.962517    8960 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 18:53:35.983881    8960 out.go:304] Setting JSON to false
	I0329 18:53:35.988500    8960 start.go:114] hostinfo: {"hostname":"minikube8","uptime":7212,"bootTime":1648572803,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0329 18:53:35.988500    8960 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 18:53:35.993500    8960 out.go:176] * [skaffold-20220329185334-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0329 18:53:35.994287    8960 notify.go:193] Checking for updates...
	I0329 18:53:36.007090    8960 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0329 18:53:36.014333    8960 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0329 18:53:36.017821    8960 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 18:53:36.020423    8960 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 18:53:36.021433    8960 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 18:53:37.931537    8960 docker.go:137] docker version: linux-20.10.13
	I0329 18:53:37.939510    8960 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 18:53:38.638272    8960 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-29 18:53:38.2877435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 18:53:38.648699    8960 out.go:176] * Using the docker driver based on user configuration
	I0329 18:53:38.648699    8960 start.go:283] selected driver: docker
	I0329 18:53:38.648699    8960 start.go:800] validating driver "docker" against <nil>
	I0329 18:53:38.648699    8960 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0329 18:53:38.771204    8960 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 18:53:39.466244    8960 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-29 18:53:39.1055701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 18:53:39.466539    8960 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0329 18:53:39.467340    8960 start_flags.go:819] Wait components to verify : map[apiserver:true system_pods:true]
	I0329 18:53:39.467340    8960 cni.go:93] Creating CNI manager for ""
	I0329 18:53:39.467340    8960 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 18:53:39.467340    8960 start_flags.go:306] config:
	{Name:skaffold-20220329185334-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:skaffold-20220329185334-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 18:53:39.475678    8960 out.go:176] * Starting control plane node skaffold-20220329185334-1328 in cluster skaffold-20220329185334-1328
	I0329 18:53:39.475678    8960 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 18:53:39.480909    8960 out.go:176] * Pulling base image ...
	I0329 18:53:39.480909    8960 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 18:53:39.481032    8960 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 18:53:39.481118    8960 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 18:53:39.481118    8960 cache.go:57] Caching tarball of preloaded images
	I0329 18:53:39.481653    8960 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0329 18:53:39.481797    8960 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0329 18:53:39.482393    8960 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\config.json ...
	I0329 18:53:39.482605    8960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\config.json: {Name:mk99859e6ef13587325d66a71dda5437f22d62ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:53:39.945510    8960 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 18:53:39.945510    8960 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 18:53:39.945510    8960 cache.go:208] Successfully downloaded all kic artifacts
	I0329 18:53:39.945510    8960 start.go:348] acquiring machines lock for skaffold-20220329185334-1328: {Name:mkc9381e0b92bfcf4c2f9e1cb0ab525df9399b3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 18:53:39.945510    8960 start.go:352] acquired machines lock for "skaffold-20220329185334-1328" in 0s
	I0329 18:53:39.945510    8960 start.go:90] Provisioning new machine with config: &{Name:skaffold-20220329185334-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:skaffold-20220329185334-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 18:53:39.945510    8960 start.go:127] createHost starting for "" (driver="docker")
	I0329 18:53:39.955511    8960 out.go:203] * Creating docker container (CPUs=2, Memory=2600MB) ...
	I0329 18:53:39.955511    8960 start.go:161] libmachine.API.Create for "skaffold-20220329185334-1328" (driver="docker")
	I0329 18:53:39.956518    8960 client.go:168] LocalClient.Create starting
	I0329 18:53:39.956518    8960 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I0329 18:53:39.956518    8960 main.go:130] libmachine: Decoding PEM data...
	I0329 18:53:39.956518    8960 main.go:130] libmachine: Parsing certificate...
	I0329 18:53:39.956518    8960 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I0329 18:53:39.956518    8960 main.go:130] libmachine: Decoding PEM data...
	I0329 18:53:39.957495    8960 main.go:130] libmachine: Parsing certificate...
	I0329 18:53:39.966542    8960 cli_runner.go:133] Run: docker network inspect skaffold-20220329185334-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0329 18:53:40.412028    8960 cli_runner.go:180] docker network inspect skaffold-20220329185334-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0329 18:53:40.420936    8960 network_create.go:262] running [docker network inspect skaffold-20220329185334-1328] to gather additional debugging logs...
	I0329 18:53:40.420936    8960 cli_runner.go:133] Run: docker network inspect skaffold-20220329185334-1328
	W0329 18:53:40.889804    8960 cli_runner.go:180] docker network inspect skaffold-20220329185334-1328 returned with exit code 1
	I0329 18:53:40.889834    8960 network_create.go:265] error running [docker network inspect skaffold-20220329185334-1328]: docker network inspect skaffold-20220329185334-1328: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: skaffold-20220329185334-1328
	I0329 18:53:40.889834    8960 network_create.go:267] output of [docker network inspect skaffold-20220329185334-1328]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: skaffold-20220329185334-1328
	
	** /stderr **
	I0329 18:53:40.898428    8960 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 18:53:41.383129    8960 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006168] misses:0}
	I0329 18:53:41.383229    8960 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 18:53:41.383229    8960 network_create.go:114] attempt to create docker network skaffold-20220329185334-1328 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0329 18:53:41.390180    8960 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true skaffold-20220329185334-1328
	I0329 18:53:41.961848    8960 network_create.go:98] docker network skaffold-20220329185334-1328 192.168.49.0/24 created
	I0329 18:53:41.961931    8960 kic.go:106] calculated static IP "192.168.49.2" for the "skaffold-20220329185334-1328" container
	I0329 18:53:41.975796    8960 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0329 18:53:42.416428    8960 cli_runner.go:133] Run: docker volume create skaffold-20220329185334-1328 --label name.minikube.sigs.k8s.io=skaffold-20220329185334-1328 --label created_by.minikube.sigs.k8s.io=true
	I0329 18:53:42.873381    8960 oci.go:102] Successfully created a docker volume skaffold-20220329185334-1328
	I0329 18:53:42.880988    8960 cli_runner.go:133] Run: docker run --rm --name skaffold-20220329185334-1328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20220329185334-1328 --entrypoint /usr/bin/test -v skaffold-20220329185334-1328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0329 18:53:45.414549    8960 cli_runner.go:186] Completed: docker run --rm --name skaffold-20220329185334-1328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20220329185334-1328 --entrypoint /usr/bin/test -v skaffold-20220329185334-1328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib: (2.5335455s)
	I0329 18:53:45.414549    8960 oci.go:106] Successfully prepared a docker volume skaffold-20220329185334-1328
	I0329 18:53:45.414549    8960 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 18:53:45.414549    8960 kic.go:179] Starting extracting preloaded images to volume ...
	I0329 18:53:45.427491    8960 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-20220329185334-1328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0329 18:54:23.224169    8960 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-20220329185334-1328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (37.7964599s)
	I0329 18:54:23.224169    8960 kic.go:188] duration metric: took 37.809402 seconds to extract preloaded images to volume
	I0329 18:54:23.231887    8960 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 18:54:23.930574    8960 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:45 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-29 18:54:23.5680724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 18:54:23.946625    8960 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0329 18:54:24.665192    8960 cli_runner.go:133] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname skaffold-20220329185334-1328 --name skaffold-20220329185334-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20220329185334-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=skaffold-20220329185334-1328 --network skaffold-20220329185334-1328 --ip 192.168.49.2 --volume skaffold-20220329185334-1328:/var --security-opt apparmor=unconfined --memory=2600mb --memory-swap=2600mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0329 18:54:26.779454    8960 cli_runner.go:186] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname skaffold-20220329185334-1328 --name skaffold-20220329185334-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20220329185334-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=skaffold-20220329185334-1328 --network skaffold-20220329185334-1328 --ip 192.168.49.2 --volume skaffold-20220329185334-1328:/var --security-opt apparmor=unconfined --memory=2600mb --memory-swap=2600mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: (2.1140921s)
	I0329 18:54:26.788665    8960 cli_runner.go:133] Run: docker container inspect skaffold-20220329185334-1328 --format={{.State.Running}}
	I0329 18:54:27.291810    8960 cli_runner.go:133] Run: docker container inspect skaffold-20220329185334-1328 --format={{.State.Status}}
	I0329 18:54:27.748674    8960 cli_runner.go:133] Run: docker exec skaffold-20220329185334-1328 stat /var/lib/dpkg/alternatives/iptables
	I0329 18:54:28.600317    8960 oci.go:278] the created container "skaffold-20220329185334-1328" has a running status.
	I0329 18:54:28.600317    8960 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\skaffold-20220329185334-1328\id_rsa...
	I0329 18:54:28.870997    8960 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\skaffold-20220329185334-1328\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0329 18:54:29.471510    8960 cli_runner.go:133] Run: docker container inspect skaffold-20220329185334-1328 --format={{.State.Status}}
	I0329 18:54:29.950705    8960 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0329 18:54:29.950705    8960 kic_runner.go:114] Args: [docker exec --privileged skaffold-20220329185334-1328 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0329 18:54:30.810709    8960 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\skaffold-20220329185334-1328\id_rsa...
	I0329 18:54:31.311266    8960 cli_runner.go:133] Run: docker container inspect skaffold-20220329185334-1328 --format={{.State.Status}}
	I0329 18:54:31.793991    8960 machine.go:88] provisioning docker machine ...
	I0329 18:54:31.794074    8960 ubuntu.go:169] provisioning hostname "skaffold-20220329185334-1328"
	I0329 18:54:31.801216    8960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220329185334-1328
	I0329 18:54:32.293142    8960 main.go:130] libmachine: Using SSH client type: native
	I0329 18:54:32.293249    8960 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 56237 <nil> <nil>}
	I0329 18:54:32.293249    8960 main.go:130] libmachine: About to run SSH command:
	sudo hostname skaffold-20220329185334-1328 && echo "skaffold-20220329185334-1328" | sudo tee /etc/hostname
	I0329 18:54:32.487891    8960 main.go:130] libmachine: SSH cmd err, output: <nil>: skaffold-20220329185334-1328
	
	I0329 18:54:32.497783    8960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220329185334-1328
	I0329 18:54:32.973565    8960 main.go:130] libmachine: Using SSH client type: native
	I0329 18:54:32.974139    8960 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 56237 <nil> <nil>}
	I0329 18:54:32.974167    8960 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sskaffold-20220329185334-1328' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 skaffold-20220329185334-1328/g' /etc/hosts;
				else 
					echo '127.0.1.1 skaffold-20220329185334-1328' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 18:54:33.177178    8960 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 18:54:33.177178    8960 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I0329 18:54:33.177178    8960 ubuntu.go:177] setting up certificates
	I0329 18:54:33.177178    8960 provision.go:83] configureAuth start
	I0329 18:54:33.184966    8960 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-20220329185334-1328
	I0329 18:54:33.626408    8960 provision.go:138] copyHostCerts
	I0329 18:54:33.626408    8960 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I0329 18:54:33.626408    8960 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I0329 18:54:33.626408    8960 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0329 18:54:33.628417    8960 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I0329 18:54:33.628504    8960 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I0329 18:54:33.628504    8960 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0329 18:54:33.629689    8960 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I0329 18:54:33.629689    8960 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I0329 18:54:33.630293    8960 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1679 bytes)
	I0329 18:54:33.630994    8960 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.skaffold-20220329185334-1328 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube skaffold-20220329185334-1328]
	I0329 18:54:33.737841    8960 provision.go:172] copyRemoteCerts
	I0329 18:54:33.747840    8960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 18:54:33.753832    8960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220329185334-1328
	I0329 18:54:34.224089    8960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56237 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\skaffold-20220329185334-1328\id_rsa Username:docker}
	I0329 18:54:34.369077    8960 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1257 bytes)
	I0329 18:54:34.427897    8960 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0329 18:54:34.482335    8960 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 18:54:34.533422    8960 provision.go:86] duration metric: configureAuth took 1.3557087s
	I0329 18:54:34.533422    8960 ubuntu.go:193] setting minikube options for container-runtime
	I0329 18:54:34.533987    8960 config.go:176] Loaded profile config "skaffold-20220329185334-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 18:54:34.541958    8960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220329185334-1328
	I0329 18:54:35.032205    8960 main.go:130] libmachine: Using SSH client type: native
	I0329 18:54:35.032866    8960 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 56237 <nil> <nil>}
	I0329 18:54:35.032866    8960 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 18:54:35.247256    8960 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 18:54:35.247256    8960 ubuntu.go:71] root file system type: overlay
	I0329 18:54:35.247256    8960 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 18:54:35.256010    8960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220329185334-1328
	I0329 18:54:35.730271    8960 main.go:130] libmachine: Using SSH client type: native
	I0329 18:54:35.730844    8960 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 56237 <nil> <nil>}
	I0329 18:54:35.730844    8960 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 18:54:35.986669    8960 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0329 18:54:35.994426    8960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220329185334-1328
	I0329 18:54:36.459975    8960 main.go:130] libmachine: Using SSH client type: native
	I0329 18:54:36.459975    8960 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 56237 <nil> <nil>}
	I0329 18:54:36.459975    8960 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0329 18:54:37.840724    8960 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-03-29 18:54:35.942870100 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
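The `sudo diff -u … || { sudo mv …; daemon-reload; restart; }` command above is minikube's idempotent-update idiom: the new unit file is only swapped in, and the daemon only reloaded and restarted, when its content actually differs from what is installed. A minimal sketch of the same pattern on throwaway temp files (no systemd, no sudo; the `echo` stands in for the reload/restart step):

```shell
# Idempotent update: replace the file (and trigger a reload) only on change.
old=$(mktemp) new=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd\n' > "$old"
printf 'ExecStart=/usr/bin/dockerd --tlsverify\n' > "$new"
diff -u "$old" "$new" > /dev/null || {
  mv "$new" "$old"   # install the changed file over the old one
  echo "would run: systemctl daemon-reload && systemctl restart docker"
}
```

If the two files were identical, `diff` exits 0 and the whole `{ … }` group is skipped, which is why a clean re-run of the provisioner leaves the docker service untouched.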
	I0329 18:54:37.840724    8960 machine.go:91] provisioned docker machine in 6.0466986s
	I0329 18:54:37.840724    8960 client.go:171] LocalClient.Create took 57.883872s
	I0329 18:54:37.840724    8960 start.go:169] duration metric: libmachine.API.Create for "skaffold-20220329185334-1328" took 57.8848799s
	I0329 18:54:37.840724    8960 start.go:302] post-start starting for "skaffold-20220329185334-1328" (driver="docker")
	I0329 18:54:37.840724    8960 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 18:54:37.852103    8960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 18:54:37.859603    8960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220329185334-1328
	I0329 18:54:38.312850    8960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56237 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\skaffold-20220329185334-1328\id_rsa Username:docker}
	I0329 18:54:38.441076    8960 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 18:54:38.463690    8960 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 18:54:38.463723    8960 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 18:54:38.463723    8960 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 18:54:38.463723    8960 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 18:54:38.463723    8960 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I0329 18:54:38.464199    8960 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I0329 18:54:38.465108    8960 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem -> 13282.pem in /etc/ssl/certs
	I0329 18:54:38.476261    8960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0329 18:54:38.507175    8960 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem --> /etc/ssl/certs/13282.pem (1708 bytes)
	I0329 18:54:38.567372    8960 start.go:305] post-start completed in 726.6436ms
	I0329 18:54:38.578913    8960 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-20220329185334-1328
	I0329 18:54:39.054775    8960 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\config.json ...
	I0329 18:54:39.067293    8960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 18:54:39.075033    8960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220329185334-1328
	I0329 18:54:39.580005    8960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56237 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\skaffold-20220329185334-1328\id_rsa Username:docker}
	I0329 18:54:39.739473    8960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 18:54:39.755670    8960 start.go:130] duration metric: createHost completed in 59.8098151s
	I0329 18:54:39.755670    8960 start.go:81] releasing machines lock for "skaffold-20220329185334-1328", held for 59.8098151s
	I0329 18:54:39.764501    8960 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-20220329185334-1328
	I0329 18:54:40.266213    8960 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0329 18:54:40.274460    8960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220329185334-1328
	I0329 18:54:40.275599    8960 ssh_runner.go:195] Run: systemctl --version
	I0329 18:54:40.281698    8960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220329185334-1328
	I0329 18:54:40.759706    8960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56237 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\skaffold-20220329185334-1328\id_rsa Username:docker}
	I0329 18:54:40.805922    8960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56237 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\skaffold-20220329185334-1328\id_rsa Username:docker}
	I0329 18:54:41.029642    8960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0329 18:54:41.070466    8960 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 18:54:41.101861    8960 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0329 18:54:41.111041    8960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0329 18:54:41.143934    8960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0329 18:54:41.200961    8960 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0329 18:54:41.387668    8960 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0329 18:54:41.548743    8960 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 18:54:41.596687    8960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0329 18:54:41.764286    8960 ssh_runner.go:195] Run: sudo systemctl start docker
	I0329 18:54:41.804383    8960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 18:54:41.914086    8960 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 18:54:42.016687    8960 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0329 18:54:42.016687    8960 cli_runner.go:133] Run: docker exec -t skaffold-20220329185334-1328 dig +short host.docker.internal
	I0329 18:54:42.867450    8960 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0329 18:54:42.876449    8960 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0329 18:54:42.891470    8960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
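The `/etc/hosts` edit just above is a grep-and-append upsert: strip any stale `host.minikube.internal` line, append the fresh mapping, and copy the result back. Sketched here on a temp copy rather than the real `/etc/hosts` (the addresses are illustrative):

```shell
# Upsert a hosts entry: drop any old line for the name, append the new one.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.65.1\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"
  printf '192.168.65.2\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"   # the log uses `sudo cp` back to /etc/hosts instead
```

The tab-anchored pattern (`$'\t…$'`) ensures only the exact hostname's line is removed, so unrelated entries such as `localhost` survive the rewrite.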
	I0329 18:54:42.923353    8960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20220329185334-1328
	I0329 18:54:43.382211    8960 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 18:54:43.390019    8960 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 18:54:43.473158    8960 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 18:54:43.473158    8960 docker.go:537] Images already preloaded, skipping extraction
	I0329 18:54:43.481651    8960 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 18:54:43.553878    8960 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 18:54:43.553958    8960 cache_images.go:84] Images are preloaded, skipping loading
	I0329 18:54:43.562200    8960 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0329 18:54:43.750047    8960 cni.go:93] Creating CNI manager for ""
	I0329 18:54:43.750047    8960 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 18:54:43.750047    8960 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0329 18:54:43.750047    8960 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:skaffold-20220329185334-1328 NodeName:skaffold-20220329185334-1328 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0329 18:54:43.750724    8960 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "skaffold-20220329185334-1328"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0329 18:54:43.750724    8960 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=skaffold-20220329185334-1328 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:skaffold-20220329185334-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0329 18:54:43.761756    8960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0329 18:54:43.789750    8960 binaries.go:44] Found k8s binaries, skipping transfer
	I0329 18:54:43.800672    8960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0329 18:54:43.823793    8960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0329 18:54:43.866642    8960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0329 18:54:43.909520    8960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
	I0329 18:54:43.962352    8960 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0329 18:54:43.977391    8960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 18:54:44.004419    8960 certs.go:54] Setting up C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328 for IP: 192.168.49.2
	I0329 18:54:44.005422    8960 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key
	I0329 18:54:44.005726    8960 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key
	I0329 18:54:44.006199    8960 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\client.key
	I0329 18:54:44.006351    8960 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\client.crt with IP's: []
	I0329 18:54:44.270101    8960 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\client.crt ...
	I0329 18:54:44.270101    8960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\client.crt: {Name:mkd395a059d424ffce814911a4f15b238a105bd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:54:44.271100    8960 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\client.key ...
	I0329 18:54:44.271100    8960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\client.key: {Name:mk60a56501665d5ef1b3aac7226e9e95b8b4d8d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:54:44.272058    8960 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\apiserver.key.dd3b5fb2
	I0329 18:54:44.272058    8960 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0329 18:54:44.421841    8960 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\apiserver.crt.dd3b5fb2 ...
	I0329 18:54:44.421841    8960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\apiserver.crt.dd3b5fb2: {Name:mk14cc259514907fd96f62fa07ff9ee663c29636 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:54:44.423783    8960 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\apiserver.key.dd3b5fb2 ...
	I0329 18:54:44.423783    8960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\apiserver.key.dd3b5fb2: {Name:mk402a6cef7e4b30d267e506a7861c9b432b3f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:54:44.423783    8960 certs.go:320] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\apiserver.crt.dd3b5fb2 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\apiserver.crt
	I0329 18:54:44.437829    8960 certs.go:324] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\apiserver.key.dd3b5fb2 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\apiserver.key
	I0329 18:54:44.438829    8960 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\proxy-client.key
	I0329 18:54:44.438829    8960 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\proxy-client.crt with IP's: []
	I0329 18:54:44.534805    8960 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\proxy-client.crt ...
	I0329 18:54:44.534805    8960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\proxy-client.crt: {Name:mk242412c5525650b2b89cf25301e6ab8146828f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:54:44.535797    8960 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\proxy-client.key ...
	I0329 18:54:44.535797    8960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\proxy-client.key: {Name:mk7fa50542e2a9ed18d3da889beae89b3841cc98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:54:44.542807    8960 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\1328.pem (1338 bytes)
	W0329 18:54:44.543802    8960 certs.go:384] ignoring C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\1328_empty.pem, impossibly tiny 0 bytes
	I0329 18:54:44.543802    8960 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0329 18:54:44.543802    8960 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0329 18:54:44.543802    8960 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0329 18:54:44.543802    8960 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0329 18:54:44.544872    8960 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem (1708 bytes)
	I0329 18:54:44.546834    8960 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0329 18:54:44.603787    8960 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0329 18:54:44.659727    8960 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0329 18:54:44.711881    8960 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\skaffold-20220329185334-1328\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0329 18:54:44.768206    8960 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0329 18:54:44.826291    8960 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0329 18:54:44.887422    8960 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0329 18:54:44.942688    8960 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0329 18:54:44.999286    8960 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0329 18:54:45.059636    8960 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\1328.pem --> /usr/share/ca-certificates/1328.pem (1338 bytes)
	I0329 18:54:45.110154    8960 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem --> /usr/share/ca-certificates/13282.pem (1708 bytes)
	I0329 18:54:45.161946    8960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0329 18:54:45.218082    8960 ssh_runner.go:195] Run: openssl version
	I0329 18:54:45.248569    8960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0329 18:54:45.279135    8960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0329 18:54:45.290143    8960 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 29 17:18 /usr/share/ca-certificates/minikubeCA.pem
	I0329 18:54:45.299131    8960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0329 18:54:45.324247    8960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0329 18:54:45.366557    8960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1328.pem && ln -fs /usr/share/ca-certificates/1328.pem /etc/ssl/certs/1328.pem"
	I0329 18:54:45.403359    8960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1328.pem
	I0329 18:54:45.415708    8960 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 29 17:29 /usr/share/ca-certificates/1328.pem
	I0329 18:54:45.424594    8960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1328.pem
	I0329 18:54:45.457593    8960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1328.pem /etc/ssl/certs/51391683.0"
	I0329 18:54:45.496592    8960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13282.pem && ln -fs /usr/share/ca-certificates/13282.pem /etc/ssl/certs/13282.pem"
	I0329 18:54:45.533786    8960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13282.pem
	I0329 18:54:45.546023    8960 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 29 17:29 /usr/share/ca-certificates/13282.pem
	I0329 18:54:45.556903    8960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13282.pem
	I0329 18:54:45.583793    8960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13282.pem /etc/ssl/certs/3ec20f2e.0"
	I0329 18:54:45.607374    8960 kubeadm.go:391] StartCluster: {Name:skaffold-20220329185334-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:skaffold-20220329185334-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 18:54:45.616084    8960 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0329 18:54:45.698935    8960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0329 18:54:45.732877    8960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0329 18:54:45.760074    8960 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0329 18:54:45.770564    8960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0329 18:54:45.799289    8960 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0329 18:54:45.799289    8960 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0329 18:55:06.487999    8960 out.go:203]   - Generating certificates and keys ...
	I0329 18:55:06.494014    8960 out.go:203]   - Booting up control plane ...
	I0329 18:55:06.499164    8960 out.go:203]   - Configuring RBAC rules ...
	I0329 18:55:06.502998    8960 cni.go:93] Creating CNI manager for ""
	I0329 18:55:06.502998    8960 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 18:55:06.502998    8960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0329 18:55:06.517470    8960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:55:06.519466    8960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3 minikube.k8s.io/name=skaffold-20220329185334-1328 minikube.k8s.io/updated_at=2022_03_29T18_55_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 18:55:06.573109    8960 ops.go:34] apiserver oom_adj: -16
	I0329 18:55:08.497601    8960 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3 minikube.k8s.io/name=skaffold-20220329185334-1328 minikube.k8s.io/updated_at=2022_03_29T18_55_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.9781233s)
	I0329 18:55:08.498137    8960 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1.9801202s)
	I0329 18:55:08.498137    8960 kubeadm.go:1020] duration metric: took 1.9951278s to wait for elevateKubeSystemPrivileges.
	I0329 18:55:08.498137    8960 kubeadm.go:393] StartCluster complete in 22.8906314s
	I0329 18:55:08.498222    8960 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:55:08.498282    8960 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0329 18:55:08.499648    8960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 18:55:09.086228    8960 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "skaffold-20220329185334-1328" rescaled to 1
	I0329 18:55:09.086317    8960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0329 18:55:09.086317    8960 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 18:55:09.088668    8960 out.go:176] * Verifying Kubernetes components...
	I0329 18:55:09.086317    8960 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0329 18:55:09.086317    8960 config.go:176] Loaded profile config "skaffold-20220329185334-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 18:55:09.088871    8960 addons.go:65] Setting storage-provisioner=true in profile "skaffold-20220329185334-1328"
	I0329 18:55:09.088971    8960 addons.go:153] Setting addon storage-provisioner=true in "skaffold-20220329185334-1328"
	W0329 18:55:09.088971    8960 addons.go:165] addon storage-provisioner should already be in state true
	I0329 18:55:09.088971    8960 addons.go:65] Setting default-storageclass=true in profile "skaffold-20220329185334-1328"
	I0329 18:55:09.089011    8960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "skaffold-20220329185334-1328"
	I0329 18:55:09.089011    8960 host.go:66] Checking if "skaffold-20220329185334-1328" exists ...
	I0329 18:55:09.102512    8960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 18:55:09.114092    8960 cli_runner.go:133] Run: docker container inspect skaffold-20220329185334-1328 --format={{.State.Status}}
	I0329 18:55:09.119923    8960 cli_runner.go:133] Run: docker container inspect skaffold-20220329185334-1328 --format={{.State.Status}}
	I0329 18:55:09.238279    8960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0329 18:55:09.252293    8960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20220329185334-1328
	I0329 18:55:09.651305    8960 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 18:55:09.651305    8960 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 18:55:09.651305    8960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0329 18:55:09.659305    8960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220329185334-1328
	I0329 18:55:09.690298    8960 addons.go:153] Setting addon default-storageclass=true in "skaffold-20220329185334-1328"
	W0329 18:55:09.690298    8960 addons.go:165] addon default-storageclass should already be in state true
	I0329 18:55:09.690298    8960 host.go:66] Checking if "skaffold-20220329185334-1328" exists ...
	I0329 18:55:09.712293    8960 cli_runner.go:133] Run: docker container inspect skaffold-20220329185334-1328 --format={{.State.Status}}
	I0329 18:55:09.760296    8960 api_server.go:51] waiting for apiserver process to appear ...
	I0329 18:55:09.770295    8960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0329 18:55:10.125097    8960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56237 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\skaffold-20220329185334-1328\id_rsa Username:docker}
	I0329 18:55:10.182927    8960 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0329 18:55:10.182927    8960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0329 18:55:10.189928    8960 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220329185334-1328
	I0329 18:55:10.364205    8960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 18:55:10.703806    8960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56237 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\skaffold-20220329185334-1328\id_rsa Username:docker}
	I0329 18:55:11.024466    8960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0329 18:55:11.275881    8960 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.0375902s)
	I0329 18:55:11.275881    8960 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0329 18:55:11.275881    8960 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.5055768s)
	I0329 18:55:11.275881    8960 api_server.go:71] duration metric: took 2.1895509s to wait for apiserver process to appear ...
	I0329 18:55:11.275881    8960 api_server.go:87] waiting for apiserver healthz status ...
	I0329 18:55:11.275881    8960 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56236/healthz ...
	I0329 18:55:11.377352    8960 api_server.go:266] https://127.0.0.1:56236/healthz returned 200:
	ok
	I0329 18:55:11.384279    8960 api_server.go:140] control plane version: v1.23.5
	I0329 18:55:11.384326    8960 api_server.go:130] duration metric: took 108.4265ms to wait for apiserver health ...
	I0329 18:55:11.384326    8960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0329 18:55:11.470267    8960 system_pods.go:59] 4 kube-system pods found
	I0329 18:55:11.470267    8960 system_pods.go:61] "etcd-skaffold-20220329185334-1328" [cf363efd-d403-4b21-a971-a9cfa0949db3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0329 18:55:11.470267    8960 system_pods.go:61] "kube-apiserver-skaffold-20220329185334-1328" [1088aeaf-104a-4629-a394-f1988ff2e211] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0329 18:55:11.470267    8960 system_pods.go:61] "kube-controller-manager-skaffold-20220329185334-1328" [1503b833-88ef-4784-ac7a-dca21a732604] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0329 18:55:11.470267    8960 system_pods.go:61] "kube-scheduler-skaffold-20220329185334-1328" [2e466018-04b2-4c6d-a144-bd2f427f03f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0329 18:55:11.470267    8960 system_pods.go:74] duration metric: took 85.9402ms to wait for pod list to return data ...
	I0329 18:55:11.470267    8960 kubeadm.go:548] duration metric: took 2.3839358s to wait for : map[apiserver:true system_pods:true] ...
	I0329 18:55:11.470267    8960 node_conditions.go:102] verifying NodePressure condition ...
	I0329 18:55:11.563703    8960 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0329 18:55:11.563888    8960 node_conditions.go:123] node cpu capacity is 16
	I0329 18:55:11.563888    8960 node_conditions.go:105] duration metric: took 93.6203ms to run NodePressure ...
	I0329 18:55:11.563888    8960 start.go:213] waiting for startup goroutines ...
	I0329 18:55:11.763620    8960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.399407s)
	I0329 18:55:11.900369    8960 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0329 18:55:11.900369    8960 addons.go:417] enableAddons completed in 2.8140356s
	I0329 18:55:12.125481    8960 start.go:498] kubectl: 1.18.2, cluster: 1.23.5 (minor skew: 5)
	I0329 18:55:12.139399    8960 out.go:176] 
	W0329 18:55:12.139947    8960 out.go:241] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilites with Kubernetes 1.23.5.
	I0329 18:55:12.145653    8960 out.go:176]   - Want kubectl v1.23.5? Try 'minikube kubectl -- get pods -A'
	I0329 18:55:12.148920    8960 out.go:176] * Done! kubectl is now configured to use "skaffold-20220329185334-1328" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-03-29 18:54:27 UTC, end at Tue 2022-03-29 18:55:33 UTC. --
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.395309900Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.398642100Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.398757800Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.398795000Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.398889200Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.428374500Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.446150800Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.446258200Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.446276900Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.446285500Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.446294100Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.446302200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.446639800Z" level=info msg="Loading containers: start."
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.632645400Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.710253000Z" level=info msg="Loading containers: done."
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.760413500Z" level=info msg="Docker daemon" commit=906f57f graphdriver(s)=overlay2 version=20.10.13
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.760651600Z" level=info msg="Daemon has completed initialization"
	Mar 29 18:54:37 skaffold-20220329185334-1328 systemd[1]: Started Docker Application Container Engine.
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.820352900Z" level=info msg="API listen on [::]:2376"
	Mar 29 18:54:37 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:54:37.831270300Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 29 18:55:22 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:55:22.044846400Z" level=info msg="parsed scheme: \"\"" module=grpc
	Mar 29 18:55:22 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:55:22.045057900Z" level=info msg="scheme \"\" not registered, fallback to default scheme" module=grpc
	Mar 29 18:55:22 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:55:22.045091000Z" level=info msg="ccResolverWrapper: sending update to cc: {[{localhost  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Mar 29 18:55:22 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:55:22.045201400Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Mar 29 18:55:24 skaffold-20220329185334-1328 dockerd[471]: time="2022-03-29T18:55:24.345863000Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {localhost  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing only one connection allowed\". Reconnecting..." module=grpc
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	ce86367254c98       a4ca41631cc7a       11 seconds ago      Running             coredns                   0                   fe054ccb0e34e
	e886e07b58023       3c53fa8541f95       13 seconds ago      Running             kube-proxy                0                   175cd9df0342a
	4cecbba7bd7e4       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   d687f349474c2
	bca1607637561       3fc1d62d65872       39 seconds ago      Running             kube-apiserver            0                   19236b397a641
	3e04234375358       884d49d6d8c9f       39 seconds ago      Running             kube-scheduler            0                   89ca6580ecfc4
	1e19e8a356f99       25f8c7f3da61c       39 seconds ago      Running             etcd                      0                   8fb0862149e66
	0037d4906a6e0       b0c9e5e4dbb14       39 seconds ago      Running             kube-controller-manager   0                   7874eeec75a03
	
	* 
	* ==> coredns [ce86367254c9] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* Name:               skaffold-20220329185334-1328
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=skaffold-20220329185334-1328
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3
	                    minikube.k8s.io/name=skaffold-20220329185334-1328
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_29T18_55_06_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 29 Mar 2022 18:55:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  skaffold-20220329185334-1328
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 29 Mar 2022 18:55:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 29 Mar 2022 18:55:18 +0000   Tue, 29 Mar 2022 18:54:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 29 Mar 2022 18:55:18 +0000   Tue, 29 Mar 2022 18:54:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 29 Mar 2022 18:55:18 +0000   Tue, 29 Mar 2022 18:54:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 29 Mar 2022 18:55:18 +0000   Tue, 29 Mar 2022 18:55:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    skaffold-20220329185334-1328
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                140a143b31184b58be947b52a01fff83
	  Boot ID:                    c6888bb0-0d7a-4902-95ce-20313bf24adc
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.13
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-9q4cm                                 100m (0%!)(MISSING)     0 (0%!)(MISSING)      70Mi (0%!)(MISSING)        170Mi (0%!)(MISSING)     14s
	  kube-system                 etcd-skaffold-20220329185334-1328                       100m (0%!)(MISSING)     0 (0%!)(MISSING)      100Mi (0%!)(MISSING)       0 (0%!)(MISSING)         28s
	  kube-system                 kube-apiserver-skaffold-20220329185334-1328             250m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28s
	  kube-system                 kube-controller-manager-skaffold-20220329185334-1328    200m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         31s
	  kube-system                 kube-proxy-2zhbj                                        0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         15s
	  kube-system                 kube-scheduler-skaffold-20220329185334-1328             100m (0%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         25s
	  kube-system                 storage-provisioner                                     0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%!)(MISSING)   0 (0%!)(MISSING)
	  memory             170Mi (0%!)(MISSING)  170Mi (0%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 11s                kube-proxy  
	  Normal  NodeHasSufficientMemory  41s (x5 over 41s)  kubelet     Node skaffold-20220329185334-1328 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x5 over 41s)  kubelet     Node skaffold-20220329185334-1328 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x5 over 41s)  kubelet     Node skaffold-20220329185334-1328 status is now: NodeHasSufficientPID
	  Normal  Starting                 26s                kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    26s                kubelet     Node skaffold-20220329185334-1328 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s                kubelet     Node skaffold-20220329185334-1328 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             26s                kubelet     Node skaffold-20220329185334-1328 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  26s                kubelet     Node skaffold-20220329185334-1328 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  25s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                15s                kubelet     Node skaffold-20220329185334-1328 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Mar29 18:29] WSL2: Performing memory compaction.
	[Mar29 18:30] WSL2: Performing memory compaction.
	[Mar29 18:31] WSL2: Performing memory compaction.
	[Mar29 18:32] WSL2: Performing memory compaction.
	[Mar29 18:33] WSL2: Performing memory compaction.
	[Mar29 18:34] WSL2: Performing memory compaction.
	[Mar29 18:35] WSL2: Performing memory compaction.
	[Mar29 18:36] WSL2: Performing memory compaction.
	[Mar29 18:37] WSL2: Performing memory compaction.
	[Mar29 18:38] WSL2: Performing memory compaction.
	[Mar29 18:39] WSL2: Performing memory compaction.
	[Mar29 18:41] WSL2: Performing memory compaction.
	[Mar29 18:42] WSL2: Performing memory compaction.
	[Mar29 18:43] WSL2: Performing memory compaction.
	[Mar29 18:44] WSL2: Performing memory compaction.
	[Mar29 18:45] WSL2: Performing memory compaction.
	[Mar29 18:46] WSL2: Performing memory compaction.
	[Mar29 18:47] WSL2: Performing memory compaction.
	[Mar29 18:48] WSL2: Performing memory compaction.
	[Mar29 18:50] WSL2: Performing memory compaction.
	[Mar29 18:51] WSL2: Performing memory compaction.
	[Mar29 18:52] WSL2: Performing memory compaction.
	[Mar29 18:53] WSL2: Performing memory compaction.
	[Mar29 18:54] WSL2: Performing memory compaction.
	[Mar29 18:55] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [1e19e8a356f9] <==
	* {"level":"info","ts":"2022-03-29T18:55:19.230Z","caller":"traceutil/trace.go:171","msg":"trace[828803183] linearizableReadLoop","detail":"{readStateIndex:441; appliedIndex:441; }","duration":"101.682ms","start":"2022-03-29T18:55:19.128Z","end":"2022-03-29T18:55:19.229Z","steps":["trace[828803183] 'read index received'  (duration: 101.1681ms)","trace[828803183] 'applied index is now lower than readState.Index'  (duration: 508.3µs)"],"step_count":2}
	{"level":"warn","ts":"2022-03-29T18:55:19.265Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"331.7581ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" ","response":"range_response_count:1 size:254"}
	{"level":"warn","ts":"2022-03-29T18:55:19.265Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"230.5109ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:260"}
	{"level":"info","ts":"2022-03-29T18:55:19.265Z","caller":"traceutil/trace.go:171","msg":"trace[1488823214] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:431; }","duration":"331.9475ms","start":"2022-03-29T18:55:18.934Z","end":"2022-03-29T18:55:19.265Z","steps":["trace[1488823214] 'agreement among raft nodes before linearized reading'  (duration: 296.6172ms)","trace[1488823214] 'range keys from in-memory index tree'  (duration: 35.0815ms)"],"step_count":2}
	{"level":"info","ts":"2022-03-29T18:55:19.266Z","caller":"traceutil/trace.go:171","msg":"trace[1399017205] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:431; }","duration":"230.5656ms","start":"2022-03-29T18:55:19.035Z","end":"2022-03-29T18:55:19.265Z","steps":["trace[1399017205] 'agreement among raft nodes before linearized reading'  (duration: 194.5563ms)","trace[1399017205] 'range keys from in-memory index tree'  (duration: 35.9192ms)"],"step_count":2}
	{"level":"warn","ts":"2022-03-29T18:55:19.266Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-03-29T18:55:18.933Z","time spent":"332.1006ms","remote":"127.0.0.1:58744","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":1,"response size":278,"request content":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" "}
	{"level":"warn","ts":"2022-03-29T18:55:19.266Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"238.361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3640"}
	{"level":"info","ts":"2022-03-29T18:55:19.266Z","caller":"traceutil/trace.go:171","msg":"trace[559821554] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:431; }","duration":"238.4101ms","start":"2022-03-29T18:55:19.027Z","end":"2022-03-29T18:55:19.266Z","steps":["trace[559821554] 'agreement among raft nodes before linearized reading'  (duration: 202.2386ms)","trace[559821554] 'range keys from in-memory index tree'  (duration: 36.097ms)"],"step_count":2}
	{"level":"info","ts":"2022-03-29T18:55:19.266Z","caller":"traceutil/trace.go:171","msg":"trace[1559834018] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"127.0281ms","start":"2022-03-29T18:55:19.139Z","end":"2022-03-29T18:55:19.266Z","steps":["trace[1559834018] 'process raft request'  (duration: 90.4762ms)","trace[1559834018] 'compare'  (duration: 36.255ms)"],"step_count":2}
	{"level":"info","ts":"2022-03-29T18:55:19.266Z","caller":"traceutil/trace.go:171","msg":"trace[1742794888] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"127.1116ms","start":"2022-03-29T18:55:19.139Z","end":"2022-03-29T18:55:19.266Z","steps":["trace[1742794888] 'process raft request'  (duration: 126.622ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-29T18:55:19.266Z","caller":"traceutil/trace.go:171","msg":"trace[1587416928] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"127.0473ms","start":"2022-03-29T18:55:19.139Z","end":"2022-03-29T18:55:19.266Z","steps":["trace[1587416928] 'process raft request'  (duration: 126.5473ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T18:55:19.267Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"127.4641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-2zhbj\" ","response":"range_response_count:1 size:3451"}
	{"level":"info","ts":"2022-03-29T18:55:19.267Z","caller":"traceutil/trace.go:171","msg":"trace[1848342149] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-2zhbj; range_end:; response_count:1; response_revision:435; }","duration":"127.5136ms","start":"2022-03-29T18:55:19.139Z","end":"2022-03-29T18:55:19.267Z","steps":["trace[1848342149] 'agreement among raft nodes before linearized reading'  (duration: 127.4106ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T18:55:19.267Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"123.8074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2884"}
	{"level":"info","ts":"2022-03-29T18:55:19.267Z","caller":"traceutil/trace.go:171","msg":"trace[84410558] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:435; }","duration":"124.087ms","start":"2022-03-29T18:55:19.143Z","end":"2022-03-29T18:55:19.267Z","steps":["trace[84410558] 'agreement among raft nodes before linearized reading'  (duration: 123.7593ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-29T18:55:19.454Z","caller":"traceutil/trace.go:171","msg":"trace[1630856081] transaction","detail":"{read_only:false; number_of_response:1; response_revision:439; }","duration":"114.1631ms","start":"2022-03-29T18:55:19.339Z","end":"2022-03-29T18:55:19.454Z","steps":["trace[1630856081] 'process raft request'  (duration: 87.5165ms)","trace[1630856081] 'compare'  (duration: 26.0666ms)"],"step_count":2}
	{"level":"info","ts":"2022-03-29T18:55:19.454Z","caller":"traceutil/trace.go:171","msg":"trace[2111710240] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"112.689ms","start":"2022-03-29T18:55:19.341Z","end":"2022-03-29T18:55:19.454Z","steps":["trace[2111710240] 'process raft request'  (duration: 112.1156ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-29T18:55:19.454Z","caller":"traceutil/trace.go:171","msg":"trace[1475625392] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"113.7653ms","start":"2022-03-29T18:55:19.340Z","end":"2022-03-29T18:55:19.454Z","steps":["trace[1475625392] 'process raft request'  (duration: 113.3337ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T18:55:19.454Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.9697ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:226"}
	{"level":"info","ts":"2022-03-29T18:55:19.454Z","caller":"traceutil/trace.go:171","msg":"trace[861557442] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:442; }","duration":"112.1453ms","start":"2022-03-29T18:55:19.342Z","end":"2022-03-29T18:55:19.454Z","steps":["trace[861557442] 'agreement among raft nodes before linearized reading'  (duration: 111.523ms)"],"step_count":1}
	{"level":"warn","ts":"2022-03-29T18:55:19.633Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.9362ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-03-29T18:55:19.633Z","caller":"traceutil/trace.go:171","msg":"trace[2129081723] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:0; response_revision:443; }","duration":"103.1017ms","start":"2022-03-29T18:55:19.530Z","end":"2022-03-29T18:55:19.633Z","steps":["trace[2129081723] 'agreement among raft nodes before linearized reading'  (duration: 96.3922ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-29T18:55:19.634Z","caller":"traceutil/trace.go:171","msg":"trace[154880297] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"101.6423ms","start":"2022-03-29T18:55:19.532Z","end":"2022-03-29T18:55:19.634Z","steps":["trace[154880297] 'process raft request'  (duration: 94.8198ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-29T18:55:19.634Z","caller":"traceutil/trace.go:171","msg":"trace[897300609] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"101.8994ms","start":"2022-03-29T18:55:19.532Z","end":"2022-03-29T18:55:19.634Z","steps":["trace[897300609] 'process raft request'  (duration: 101.3153ms)"],"step_count":1}
	{"level":"info","ts":"2022-03-29T18:55:19.634Z","caller":"traceutil/trace.go:171","msg":"trace[586622084] transaction","detail":"{read_only:false; response_revision:446; number_of_response:1; }","duration":"100.8232ms","start":"2022-03-29T18:55:19.533Z","end":"2022-03-29T18:55:19.634Z","steps":["trace[586622084] 'process raft request'  (duration: 100.4319ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  18:55:33 up  1:44,  0 users,  load average: 1.82, 1.57, 1.46
	Linux skaffold-20220329185334-1328 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [bca160763756] <==
	* I0329 18:55:01.227909       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0329 18:55:01.228158       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0329 18:55:01.228165       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0329 18:55:01.228166       1 cache.go:39] Caches are synced for autoregister controller
	I0329 18:55:01.228278       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0329 18:55:01.331885       1 controller.go:611] quota admission added evaluator for: namespaces
	I0329 18:55:02.126771       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0329 18:55:02.134744       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0329 18:55:02.136325       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0329 18:55:02.144139       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0329 18:55:02.144241       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0329 18:55:03.851837       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0329 18:55:04.003717       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0329 18:55:04.182215       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0329 18:55:04.236556       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0329 18:55:04.238450       1 controller.go:611] quota admission added evaluator for: endpoints
	I0329 18:55:04.245292       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0329 18:55:04.252685       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0329 18:55:06.227960       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0329 18:55:06.252164       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0329 18:55:06.349656       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0329 18:55:07.437253       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0329 18:55:18.827293       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0329 18:55:18.834835       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0329 18:55:22.434341       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [0037d4906a6e] <==
	* I0329 18:55:18.135316       1 shared_informer.go:247] Caches are synced for TTL 
	I0329 18:55:18.135525       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0329 18:55:18.135808       1 shared_informer.go:247] Caches are synced for job 
	I0329 18:55:18.135979       1 shared_informer.go:247] Caches are synced for deployment 
	I0329 18:55:18.137748       1 range_allocator.go:374] Set node skaffold-20220329185334-1328 PodCIDR to [10.244.0.0/24]
	I0329 18:55:18.141712       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0329 18:55:18.227550       1 shared_informer.go:247] Caches are synced for HPA 
	I0329 18:55:18.227664       1 shared_informer.go:247] Caches are synced for endpoint 
	I0329 18:55:18.230415       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0329 18:55:18.248032       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 18:55:18.327113       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0329 18:55:18.327248       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0329 18:55:18.327281       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0329 18:55:18.327290       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0329 18:55:18.329605       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0329 18:55:18.331961       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 18:55:18.340359       1 event.go:294] "Event occurred" object="kube-system/etcd-skaffold-20220329185334-1328" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0329 18:55:18.341066       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-skaffold-20220329185334-1328" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0329 18:55:18.729261       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 18:55:18.729396       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0329 18:55:18.742674       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 18:55:18.836956       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 1"
	I0329 18:55:18.934322       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2zhbj"
	I0329 18:55:19.458935       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9q4cm"
	I0329 18:55:23.135832       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [e886e07b5802] <==
	* E0329 18:55:22.138410       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0329 18:55:22.142782       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0329 18:55:22.145651       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0329 18:55:22.148077       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0329 18:55:22.150813       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0329 18:55:22.153283       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0329 18:55:22.235095       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0329 18:55:22.235151       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0329 18:55:22.235298       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0329 18:55:22.427113       1 server_others.go:206] "Using iptables Proxier"
	I0329 18:55:22.427315       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0329 18:55:22.427338       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0329 18:55:22.427366       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0329 18:55:22.429179       1 server.go:656] "Version info" version="v1.23.5"
	I0329 18:55:22.430375       1 config.go:317] "Starting service config controller"
	I0329 18:55:22.430422       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0329 18:55:22.431866       1 config.go:226] "Starting endpoint slice config controller"
	I0329 18:55:22.431990       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0329 18:55:22.531653       1 shared_informer.go:247] Caches are synced for service config 
	I0329 18:55:22.533006       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [3e0423437535] <==
	* W0329 18:55:02.329274       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0329 18:55:02.329304       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0329 18:55:02.452649       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0329 18:55:02.452768       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0329 18:55:02.493024       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0329 18:55:02.493153       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0329 18:55:02.528226       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0329 18:55:02.528348       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0329 18:55:02.586828       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0329 18:55:02.586966       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0329 18:55:02.628415       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0329 18:55:02.628567       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0329 18:55:02.640716       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0329 18:55:02.640830       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0329 18:55:02.830243       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0329 18:55:02.830424       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0329 18:55:02.865240       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0329 18:55:02.865378       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0329 18:55:02.927776       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0329 18:55:02.927928       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0329 18:55:02.930386       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0329 18:55:02.930497       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0329 18:55:02.947650       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0329 18:55:02.947776       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0329 18:55:05.345698       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-03-29 18:54:27 UTC, end at Tue 2022-03-29 18:55:34 UTC. --
	Mar 29 18:55:08 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:08.432299    2019 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1eac81c445c376c85c21a1b9889b3d1f-ca-certs\") pod \"kube-controller-manager-skaffold-20220329185334-1328\" (UID: \"1eac81c445c376c85c21a1b9889b3d1f\") " pod="kube-system/kube-controller-manager-skaffold-20220329185334-1328"
	Mar 29 18:55:08 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:08.432331    2019 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1eac81c445c376c85c21a1b9889b3d1f-k8s-certs\") pod \"kube-controller-manager-skaffold-20220329185334-1328\" (UID: \"1eac81c445c376c85c21a1b9889b3d1f\") " pod="kube-system/kube-controller-manager-skaffold-20220329185334-1328"
	Mar 29 18:55:08 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:08.432361    2019 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00cc619f0eea16b2a8431f27f93e4bef-kubeconfig\") pod \"kube-scheduler-skaffold-20220329185334-1328\" (UID: \"00cc619f0eea16b2a8431f27f93e4bef\") " pod="kube-system/kube-scheduler-skaffold-20220329185334-1328"
	Mar 29 18:55:08 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:08.432392    2019 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e09083fb41043b79f6f53d5b4a65a0a2-k8s-certs\") pod \"kube-apiserver-skaffold-20220329185334-1328\" (UID: \"e09083fb41043b79f6f53d5b4a65a0a2\") " pod="kube-system/kube-apiserver-skaffold-20220329185334-1328"
	Mar 29 18:55:08 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:08.432408    2019 reconciler.go:157] "Reconciler: start to sync state"
	Mar 29 18:55:18 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:18.229493    2019 kuberuntime_manager.go:1105] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 29 18:55:18 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:18.230238    2019 docker_service.go:364] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Mar 29 18:55:18 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:18.231047    2019 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 29 18:55:18 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:18.542302    2019 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 18:55:18 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:18.732431    2019 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/35f58e71-a7e6-4c52-99ec-c9ed7f6c65ff-tmp\") pod \"storage-provisioner\" (UID: \"35f58e71-a7e6-4c52-99ec-c9ed7f6c65ff\") " pod="kube-system/storage-provisioner"
	Mar 29 18:55:18 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:18.732663    2019 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drlxc\" (UniqueName: \"kubernetes.io/projected/35f58e71-a7e6-4c52-99ec-c9ed7f6c65ff-kube-api-access-drlxc\") pod \"storage-provisioner\" (UID: \"35f58e71-a7e6-4c52-99ec-c9ed7f6c65ff\") " pod="kube-system/storage-provisioner"
	Mar 29 18:55:19 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:19.130849    2019 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 18:55:19 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:19.237320    2019 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2a59a557-f8e1-47cd-928d-893383c4b94c-kube-proxy\") pod \"kube-proxy-2zhbj\" (UID: \"2a59a557-f8e1-47cd-928d-893383c4b94c\") " pod="kube-system/kube-proxy-2zhbj"
	Mar 29 18:55:19 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:19.237595    2019 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a59a557-f8e1-47cd-928d-893383c4b94c-lib-modules\") pod \"kube-proxy-2zhbj\" (UID: \"2a59a557-f8e1-47cd-928d-893383c4b94c\") " pod="kube-system/kube-proxy-2zhbj"
	Mar 29 18:55:19 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:19.237646    2019 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a59a557-f8e1-47cd-928d-893383c4b94c-xtables-lock\") pod \"kube-proxy-2zhbj\" (UID: \"2a59a557-f8e1-47cd-928d-893383c4b94c\") " pod="kube-system/kube-proxy-2zhbj"
	Mar 29 18:55:19 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:19.237679    2019 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shvnz\" (UniqueName: \"kubernetes.io/projected/2a59a557-f8e1-47cd-928d-893383c4b94c-kube-api-access-shvnz\") pod \"kube-proxy-2zhbj\" (UID: \"2a59a557-f8e1-47cd-928d-893383c4b94c\") " pod="kube-system/kube-proxy-2zhbj"
	Mar 29 18:55:19 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:19.534611    2019 topology_manager.go:200] "Topology Admit Handler"
	Mar 29 18:55:19 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:19.641759    2019 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58c47ec6-4dec-4f7a-bcfe-17c17a60246c-config-volume\") pod \"coredns-64897985d-9q4cm\" (UID: \"58c47ec6-4dec-4f7a-bcfe-17c17a60246c\") " pod="kube-system/coredns-64897985d-9q4cm"
	Mar 29 18:55:19 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:19.642066    2019 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vplsl\" (UniqueName: \"kubernetes.io/projected/58c47ec6-4dec-4f7a-bcfe-17c17a60246c-kube-api-access-vplsl\") pod \"coredns-64897985d-9q4cm\" (UID: \"58c47ec6-4dec-4f7a-bcfe-17c17a60246c\") " pod="kube-system/coredns-64897985d-9q4cm"
	Mar 29 18:55:20 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:20.316759    2019 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d687f349474c23645eff55e335a11276e27229033d3472c0d625064a990e07a3"
	Mar 29 18:55:21 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:21.928322    2019 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="fe054ccb0e34e0e2876f7a55302dedd35b0b1e6ecc3fb308fbc8bfc49c8c0c1f"
	Mar 29 18:55:21 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:21.932004    2019 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-9q4cm through plugin: invalid network status for"
	Mar 29 18:55:22 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:22.029281    2019 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="175cd9df0342a0e653945da24ddae81c57b2762c79ccf08e18b48c1871b09120"
	Mar 29 18:55:23 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:23.044161    2019 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-9q4cm through plugin: invalid network status for"
	Mar 29 18:55:24 skaffold-20220329185334-1328 kubelet[2019]: I0329 18:55:24.162369    2019 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-9q4cm through plugin: invalid network status for"
	
	* 
	* ==> storage-provisioner [4cecbba7bd7e] <==
	* I0329 18:55:21.337151       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p skaffold-20220329185334-1328 -n skaffold-20220329185334-1328
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p skaffold-20220329185334-1328 -n skaffold-20220329185334-1328: (4.2273831s)
helpers_test.go:262: (dbg) Run:  kubectl --context skaffold-20220329185334-1328 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestSkaffold]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context skaffold-20220329185334-1328 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context skaffold-20220329185334-1328 describe pod : exit status 1 (234.6126ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context skaffold-20220329185334-1328 describe pod : exit status 1
helpers_test.go:176: Cleaning up "skaffold-20220329185334-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p skaffold-20220329185334-1328
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p skaffold-20220329185334-1328: (11.4220852s)
--- FAIL: TestSkaffold (136.66s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (18.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:170: (dbg) Non-zero exit: out/minikube-windows-amd64.exe profile list: exit status 1 (13.5449691s)
no_kubernetes_test.go:172: Profile list failed : "out/minikube-windows-amd64.exe profile list" : exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestNoKubernetes/serial/ProfileList]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect NoKubernetes-20220329185711-1328
helpers_test.go:236: (dbg) docker inspect NoKubernetes-20220329185711-1328:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0df9d62a0a4564136e43f613120932be3694918b8940b1e071b479df77cbda96",
	        "Created": "2022-03-29T19:01:32.1518242Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 144331,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-29T19:01:35.7769012Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/0df9d62a0a4564136e43f613120932be3694918b8940b1e071b479df77cbda96/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0df9d62a0a4564136e43f613120932be3694918b8940b1e071b479df77cbda96/hostname",
	        "HostsPath": "/var/lib/docker/containers/0df9d62a0a4564136e43f613120932be3694918b8940b1e071b479df77cbda96/hosts",
	        "LogPath": "/var/lib/docker/containers/0df9d62a0a4564136e43f613120932be3694918b8940b1e071b479df77cbda96/0df9d62a0a4564136e43f613120932be3694918b8940b1e071b479df77cbda96-json.log",
	        "Name": "/NoKubernetes-20220329185711-1328",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "NoKubernetes-20220329185711-1328:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "NoKubernetes-20220329185711-1328",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 17091788800,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 17091788800,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/362695c86a8626b5b62e669b84c4b130df5c5d31d3df1e7f06b064ee5f36645f-init/diff:/var/lib/docker/overlay2/4eae5e38ad3553f9f0fde74ad732117b98cb0e1af550ecd7ce386997eede943f/diff:/var/lib/docker/overlay2/6789b74c71a0164bd481c99dc53318989abbcdc33b160f5d04f44aee12c80671/diff:/var/lib/docker/overlay2/91c6ac2f9a1035ebae76daccc83a3cafe5d26b2bd6b60ad54a6e29588a7003f8/diff:/var/lib/docker/overlay2/a916d7329da723d8397bfda8e20f2beb9156ceece20236242a811e43984bbfeb/diff:/var/lib/docker/overlay2/b046f566fd53b4f2f6d2c347c752b47f6c1a64316baeaa8c0fda825346ef7aba/diff:/var/lib/docker/overlay2/13a76ad56283b88db0508d09cc281c66801cee04cdbdd8f00827788d5231a025/diff:/var/lib/docker/overlay2/8e95b9ffc444e9f6b52db61f07f0a93bb3feb51b5d9dab6b7df487fef8d277f6/diff:/var/lib/docker/overlay2/bf807f6bedece6f8033221974e6b2ffdf94a6f9320d4f09337ed51b411f8f999/diff:/var/lib/docker/overlay2/d8184ca2707eba09a4f6bd90cad4795ce0f226f863f2d84723287ad76f1158d8/diff:/var/lib/docker/overlay2/390685
8e1746cab95814956b950325758e0765c0a6597b3d9062a4c36ab409be/diff:/var/lib/docker/overlay2/128db97cb7dee3d09e506aaaf97a45b5a647d8eb90782f5dd444aec15ff525da/diff:/var/lib/docker/overlay2/713bbf0f0ba84035f3a06b59c058ccfe9e7639f2ecb9d3db244e1adec7b6c46b/diff:/var/lib/docker/overlay2/6a820465cd423660c71cbb6741a47e4619efcf0010ac49bd49146501b9ac4925/diff:/var/lib/docker/overlay2/20c66385f330043e2c50b8193a59172de08776bbabdca289cb51c1b5f17e9b98/diff:/var/lib/docker/overlay2/7b2439fa81d8ff403bd5767752380391449aeba92453e1846fd36cfce9e6de61/diff:/var/lib/docker/overlay2/ee227ab74915b1419cfbc67f2b14b08cf564b4a38a39b157de2c65250a9172bf/diff:/var/lib/docker/overlay2/0b92e2531a28b01133cc2ab65802b03c04ef0213e850ac8558c9c4071fd018dd/diff:/var/lib/docker/overlay2/3de4968e9a773e45d79b096d23038e48758528adce69f14e7ff3a93bbd3192d7/diff:/var/lib/docker/overlay2/92eb87a3831ecebb34eb1e0ea7a71af9883f8426f35387845769f5fe75f04a52/diff:/var/lib/docker/overlay2/82a4c6fc3869bde23593a8490af76e406ad5a27ef1c30a38b481944390f7466e/diff:/var/lib/d
ocker/overlay2/6c957b5c04708287c2261d895a0f4563f25cc766eb21913c4ceb36f27a04914e/diff:/var/lib/docker/overlay2/21df3fb223398ef06fb62c4617e3487f0ac955e4f38ee3d2d72c9da488d436c7/diff:/var/lib/docker/overlay2/ddaf18203a4027208ea592b9716939849af0aa5d2cac57d2b0c36382e078f483/diff:/var/lib/docker/overlay2/9a82b4c496462c1bf59ccb096f886e61674d92540023b7fed618682584358cbf/diff:/var/lib/docker/overlay2/62a8d9c5758a93af517541ab9d841f9415f55ca5503844371b7e35d47838dbb0/diff:/var/lib/docker/overlay2/c17d3885b54e341402c392175e2ab4ff1ab038acafe82a8090b1725613597f95/diff:/var/lib/docker/overlay2/d1401e4d6e04dded3c7d0335e32d0eb6cf2d7c19d21da53b836d591dddac8961/diff:/var/lib/docker/overlay2/7c4934c7f4f9cce1a35b340eebbc473f9bb33153f61f1c0454bffd0b2ae5a37e/diff:/var/lib/docker/overlay2/02d6bd07f6dbb7198d2c42fe26ff2efbabb9a889dfa0b79fd05e06a021bc81b4/diff:/var/lib/docker/overlay2/137f83b86485992317df9126e714cd331df51131ac4990d1040cf54cace6506e/diff:/var/lib/docker/overlay2/75d1117a1f5f001df3981193d1251ab8426eb4c100c9c1bbb946f0c2e0e
1d73c/diff:/var/lib/docker/overlay2/b20542be533b230be3dee06af0364759a81f26397d9371a7052efdac48fc1a3e/diff:/var/lib/docker/overlay2/b6103a89043f339bfc18a195b11f4a57f6042806725aac9d6b8db0e2af4fe01e/diff:/var/lib/docker/overlay2/69041f5eef389b325dd43fa81731c884299e2cb880a57ba904b8752c12446236/diff:/var/lib/docker/overlay2/8bc9de0232e5ba86f129e746c52a7f53836827a1a9cfc8e0c731d81af17b92a4/diff:/var/lib/docker/overlay2/5494bafa4607149ff46b2ed95fd9c86139339508d3c27bf32346963a41ae95f1/diff:/var/lib/docker/overlay2/daaadc749b2e3fb99bb23ec4d0a908e70deef3f9caff12f7b3fa29a57086e13a/diff:/var/lib/docker/overlay2/35b939c7fd0daf3717995c2aff595f96a741b48ae2da6b523aeda782ea3922e9/diff:/var/lib/docker/overlay2/b5a01cc1c410e803d28949ef6f35b55ac04473d89beb188d9d4866287b7cbbee/diff:/var/lib/docker/overlay2/c26c0af38634a15c6619c42bd2e5ec804bab550ff8078c084ba220030d8f4b93/diff:/var/lib/docker/overlay2/c12adb9eba87b6903ac0b2e16234b6a4f11a66d10d30d5379b19963433b76506/diff:/var/lib/docker/overlay2/537ea8129185a2faaaafa08ee553e15fe2cee0
4e80dab99066f779573324b53c/diff:/var/lib/docker/overlay2/ba74848f80f8d422a61241b3778f2395a32e73958e6a6dfddf5724bd0367dc67/diff:/var/lib/docker/overlay2/be8013e1c023e08543e181408137e02941d2b05181428b80bf154108c0cf48a5/diff:/var/lib/docker/overlay2/895568f040b89c0f90e7f4e41a1a77ca025acd0a0e0682a242f830a2e9c4ede7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/362695c86a8626b5b62e669b84c4b130df5c5d31d3df1e7f06b064ee5f36645f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/362695c86a8626b5b62e669b84c4b130df5c5d31d3df1e7f06b064ee5f36645f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/362695c86a8626b5b62e669b84c4b130df5c5d31d3df1e7f06b064ee5f36645f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "NoKubernetes-20220329185711-1328",
	                "Source": "/var/lib/docker/volumes/NoKubernetes-20220329185711-1328/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "NoKubernetes-20220329185711-1328",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "NoKubernetes-20220329185711-1328",
	                "name.minikube.sigs.k8s.io": "NoKubernetes-20220329185711-1328",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "969412e344de1dce180b263afd9966bb11ca2a188053d485a01c6d49c546d75f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56501"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56502"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56503"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56504"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56505"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/969412e344de",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "NoKubernetes-20220329185711-1328": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0df9d62a0a45",
	                        "NoKubernetes-20220329185711-1328"
	                    ],
	                    "NetworkID": "8be16a2a8179c6e35dd3501bf8ba047b989dc797f3ef5aff155aa84d1e9e0d91",
	                    "EndpointID": "bbf18880325b87e5d41ff04d8bd243c44a5fa640647861fa77246ecdb396a5cc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220329185711-1328 -n NoKubernetes-20220329185711-1328

                                                
                                                
=== CONT  TestNoKubernetes/serial/ProfileList
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220329185711-1328 -n NoKubernetes-20220329185711-1328: exit status 6 (4.1126992s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0329 19:02:15.828047    7912 status.go:413] kubeconfig endpoint: extract IP: "NoKubernetes-20220329185711-1328" does not appear in C:\Users\jenkins.minikube8\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 6 (may be ok)
helpers_test.go:242: "NoKubernetes-20220329185711-1328" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/ProfileList (18.14s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (931.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20220329190230-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-20220329190230-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 80 (15m30.8350215s)

                                                
                                                
-- stdout --
	* [cilium-20220329190230-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Starting control plane node cilium-20220329190230-1328 in cluster cilium-20220329190230-1328
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cilium-20220329190230-1328" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Cilium (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0329 19:13:20.329538    8332 out.go:297] Setting OutFile to fd 1952 ...
	I0329 19:13:20.407084    8332 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 19:13:20.407084    8332 out.go:310] Setting ErrFile to fd 1956...
	I0329 19:13:20.407084    8332 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 19:13:20.418084    8332 out.go:304] Setting JSON to false
	I0329 19:13:20.433072    8332 start.go:114] hostinfo: {"hostname":"minikube8","uptime":8397,"bootTime":1648572803,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0329 19:13:20.433072    8332 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 19:13:20.439072    8332 out.go:176] * [cilium-20220329190230-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0329 19:13:20.439072    8332 notify.go:193] Checking for updates...
	I0329 19:13:20.447085    8332 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0329 19:13:20.453072    8332 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0329 19:13:20.456069    8332 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 19:13:20.459071    8332 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 19:13:20.460080    8332 config.go:176] Loaded profile config "auto-20220329190226-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:13:20.460080    8332 config.go:176] Loaded profile config "cert-expiration-20220329190729-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:13:20.462071    8332 config.go:176] Loaded profile config "force-systemd-env-20220329190726-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:13:20.462071    8332 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 19:13:22.802095    8332 docker.go:137] docker version: linux-20.10.13
	I0329 19:13:22.811013    8332 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 19:13:23.572130    8332 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:75 OomKillDisable:true NGoroutines:55 SystemTime:2022-03-29 19:13:23.1952777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 19:13:23.866992    8332 out.go:176] * Using the docker driver based on user configuration
	I0329 19:13:23.867137    8332 start.go:283] selected driver: docker
	I0329 19:13:23.867137    8332 start.go:800] validating driver "docker" against <nil>
	I0329 19:13:23.867206    8332 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0329 19:13:24.000841    8332 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 19:13:24.747440    8332 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:75 OomKillDisable:true NGoroutines:55 SystemTime:2022-03-29 19:13:24.3531151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 19:13:24.747440    8332 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0329 19:13:24.748731    8332 start_flags.go:837] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0329 19:13:24.748731    8332 cni.go:93] Creating CNI manager for "cilium"
	I0329 19:13:24.748731    8332 start_flags.go:301] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0329 19:13:24.748731    8332 start_flags.go:306] config:
	{Name:cilium-20220329190230-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:cilium-20220329190230-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 19:13:24.754435    8332 out.go:176] * Starting control plane node cilium-20220329190230-1328 in cluster cilium-20220329190230-1328
	I0329 19:13:24.754435    8332 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 19:13:24.758742    8332 out.go:176] * Pulling base image ...
	I0329 19:13:24.758742    8332 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 19:13:24.758742    8332 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 19:13:24.758742    8332 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 19:13:24.758742    8332 cache.go:57] Caching tarball of preloaded images
	I0329 19:13:24.759439    8332 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0329 19:13:24.759439    8332 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0329 19:13:24.759439    8332 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\config.json ...
	I0329 19:13:24.759439    8332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\config.json: {Name:mkda0bb1e0c9f1f765c93dfe294b192747c3f86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:13:25.224346    8332 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 19:13:25.224346    8332 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 19:13:25.224346    8332 cache.go:208] Successfully downloaded all kic artifacts
	I0329 19:13:25.224346    8332 start.go:348] acquiring machines lock for cilium-20220329190230-1328: {Name:mk08cde73fc530b56f1c9b9e81f20c149672fd7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 19:13:25.224346    8332 start.go:352] acquired machines lock for "cilium-20220329190230-1328" in 0s
	I0329 19:13:25.224346    8332 start.go:90] Provisioning new machine with config: &{Name:cilium-20220329190230-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:cilium-20220329190230-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 19:13:25.224346    8332 start.go:127] createHost starting for "" (driver="docker")
	I0329 19:13:25.230387    8332 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0329 19:13:25.230387    8332 start.go:161] libmachine.API.Create for "cilium-20220329190230-1328" (driver="docker")
	I0329 19:13:25.230387    8332 client.go:168] LocalClient.Create starting
	I0329 19:13:25.231350    8332 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I0329 19:13:25.231350    8332 main.go:130] libmachine: Decoding PEM data...
	I0329 19:13:25.231350    8332 main.go:130] libmachine: Parsing certificate...
	I0329 19:13:25.231350    8332 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I0329 19:13:25.231350    8332 main.go:130] libmachine: Decoding PEM data...
	I0329 19:13:25.231350    8332 main.go:130] libmachine: Parsing certificate...
	I0329 19:13:25.240352    8332 cli_runner.go:133] Run: docker network inspect cilium-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0329 19:13:25.749343    8332 cli_runner.go:180] docker network inspect cilium-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0329 19:13:25.756354    8332 network_create.go:262] running [docker network inspect cilium-20220329190230-1328] to gather additional debugging logs...
	I0329 19:13:25.756354    8332 cli_runner.go:133] Run: docker network inspect cilium-20220329190230-1328
	W0329 19:13:26.254088    8332 cli_runner.go:180] docker network inspect cilium-20220329190230-1328 returned with exit code 1
	I0329 19:13:26.254088    8332 network_create.go:265] error running [docker network inspect cilium-20220329190230-1328]: docker network inspect cilium-20220329190230-1328: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220329190230-1328
	I0329 19:13:26.254088    8332 network_create.go:267] output of [docker network inspect cilium-20220329190230-1328]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220329190230-1328
	
	** /stderr **
	I0329 19:13:26.263853    8332 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 19:13:26.784896    8332 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000614168] misses:0}
	I0329 19:13:26.784896    8332 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 19:13:26.784896    8332 network_create.go:114] attempt to create docker network cilium-20220329190230-1328 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0329 19:13:26.791904    8332 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220329190230-1328
	I0329 19:13:27.438778    8332 network_create.go:98] docker network cilium-20220329190230-1328 192.168.49.0/24 created
	I0329 19:13:27.438778    8332 kic.go:106] calculated static IP "192.168.49.2" for the "cilium-20220329190230-1328" container
	I0329 19:13:27.452766    8332 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0329 19:13:27.948967    8332 cli_runner.go:133] Run: docker volume create cilium-20220329190230-1328 --label name.minikube.sigs.k8s.io=cilium-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true
	I0329 19:13:28.453353    8332 oci.go:102] Successfully created a docker volume cilium-20220329190230-1328
	I0329 19:13:28.459356    8332 cli_runner.go:133] Run: docker run --rm --name cilium-20220329190230-1328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220329190230-1328 --entrypoint /usr/bin/test -v cilium-20220329190230-1328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0329 19:13:32.039742    8332 cli_runner.go:186] Completed: docker run --rm --name cilium-20220329190230-1328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220329190230-1328 --entrypoint /usr/bin/test -v cilium-20220329190230-1328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib: (3.5793041s)
	I0329 19:13:32.039823    8332 oci.go:106] Successfully prepared a docker volume cilium-20220329190230-1328
	I0329 19:13:32.039823    8332 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 19:13:32.039886    8332 kic.go:179] Starting extracting preloaded images to volume ...
	I0329 19:13:32.049239    8332 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220329190230-1328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0329 19:14:12.749715    8332 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220329190230-1328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (40.7002342s)
	I0329 19:14:12.749715    8332 kic.go:188] duration metric: took 40.709587 seconds to extract preloaded images to volume
	I0329 19:14:12.756712    8332 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 19:14:13.495778    8332 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:57 OomKillDisable:true NGoroutines:50 SystemTime:2022-03-29 19:14:13.1249753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 19:14:13.503752    8332 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0329 19:14:14.226348    8332 cli_runner.go:133] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220329190230-1328 --name cilium-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220329190230-1328 --network cilium-20220329190230-1328 --ip 192.168.49.2 --volume cilium-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	W0329 19:14:15.647360    8332 cli_runner.go:180] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220329190230-1328 --name cilium-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220329190230-1328 --network cilium-20220329190230-1328 --ip 192.168.49.2 --volume cilium-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 returned with exit code 125
	I0329 19:14:15.647502    8332 cli_runner.go:186] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220329190230-1328 --name cilium-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220329190230-1328 --network cilium-20220329190230-1328 --ip 192.168.49.2 --volume cilium-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: (1.42091s)
	I0329 19:14:15.647502    8332 client.go:171] LocalClient.Create took 50.4168154s
	I0329 19:14:17.675948    8332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 19:14:17.683242    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	W0329 19:14:18.192245    8332 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328 returned with exit code 1
	I0329 19:14:18.192245    8332 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0329 19:14:18.481672    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	W0329 19:14:18.930753    8332 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328 returned with exit code 1
	I0329 19:14:18.930810    8332 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0329 19:14:19.480676    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	W0329 19:14:19.958041    8332 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328 returned with exit code 1
	W0329 19:14:19.958041    8332 start.go:277] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0329 19:14:19.958041    8332 start.go:244] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0329 19:14:19.967041    8332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 19:14:19.973050    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	W0329 19:14:20.431288    8332 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328 returned with exit code 1
	I0329 19:14:20.431573    8332 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0329 19:14:20.681952    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	W0329 19:14:21.132225    8332 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328 returned with exit code 1
	I0329 19:14:21.132225    8332 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0329 19:14:21.496610    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	W0329 19:14:21.976903    8332 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328 returned with exit code 1
	W0329 19:14:21.976903    8332 start.go:292] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0329 19:14:21.976903    8332 start.go:249] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0329 19:14:21.976903    8332 start.go:130] duration metric: createHost completed in 56.7522201s
	I0329 19:14:21.976903    8332 start.go:81] releasing machines lock for "cilium-20220329190230-1328", held for 56.7522201s
	W0329 19:14:21.976903    8332 start.go:570] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220329190230-1328 --name cilium-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220329190230-1328 --network cilium-20220329190230-1328 --ip 192.168.49.2 --volume cilium-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	c4107637dd3b7afc7bc72f7ccceb100bd4d5d976f19a845635c733b299834bef
	
	stderr:
	docker: Error response from daemon: network cilium-20220329190230-1328 not found.
	I0329 19:14:21.993889    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	W0329 19:14:22.496523    8332 start.go:575] delete host: Docker machine "cilium-20220329190230-1328" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0329 19:14:22.496523    8332 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220329190230-1328 --name cilium-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220329190230-1328 --network cilium-20220329190230-1328 --ip 192.168.49.2 --volume cilium-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	c4107637dd3b7afc7bc72f7ccceb100bd4d5d976f19a845635c733b299834bef
	
	stderr:
	docker: Error response from daemon: network cilium-20220329190230-1328 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220329190230-1328 --name cilium-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220329190230-1328 --network cilium-20220329190230-1328 --ip 192.168.49.2 --volume cilium-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	c4107637dd3b7afc7bc72f7ccceb100bd4d5d976f19a845635c733b299834bef
	
	stderr:
	docker: Error response from daemon: network cilium-20220329190230-1328 not found.
	
	I0329 19:14:22.496523    8332 start.go:585] Will try again in 5 seconds ...
	I0329 19:14:27.501533    8332 start.go:348] acquiring machines lock for cilium-20220329190230-1328: {Name:mk08cde73fc530b56f1c9b9e81f20c149672fd7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 19:14:27.501916    8332 start.go:352] acquired machines lock for "cilium-20220329190230-1328" in 127µs
	I0329 19:14:27.502212    8332 start.go:94] Skipping create...Using existing machine configuration
	I0329 19:14:27.502212    8332 fix.go:55] fixHost starting: 
	I0329 19:14:27.519219    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:14:27.974851    8332 fix.go:108] recreateIfNeeded on cilium-20220329190230-1328: state= err=<nil>
	I0329 19:14:27.975281    8332 fix.go:113] machineExists: false. err=machine does not exist
	I0329 19:14:27.978605    8332 out.go:176] * docker "cilium-20220329190230-1328" container is missing, will recreate.
	I0329 19:14:27.979597    8332 delete.go:124] DEMOLISHING cilium-20220329190230-1328 ...
	I0329 19:14:28.006174    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:14:28.495492    8332 stop.go:79] host is in state 
	I0329 19:14:28.495809    8332 main.go:130] libmachine: Stopping "cilium-20220329190230-1328"...
	I0329 19:14:28.511813    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:14:29.007429    8332 kic_runner.go:93] Run: systemctl --version
	I0329 19:14:29.007429    8332 kic_runner.go:114] Args: [docker exec --privileged cilium-20220329190230-1328 systemctl --version]
	I0329 19:14:29.611531    8332 kic_runner.go:93] Run: sudo service kubelet stop
	I0329 19:14:29.611531    8332 kic_runner.go:114] Args: [docker exec --privileged cilium-20220329190230-1328 sudo service kubelet stop]
	I0329 19:14:30.237449    8332 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container c4107637dd3b7afc7bc72f7ccceb100bd4d5d976f19a845635c733b299834bef is not running
	
	** /stderr **
	W0329 19:14:30.237597    8332 kic.go:443] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container c4107637dd3b7afc7bc72f7ccceb100bd4d5d976f19a845635c733b299834bef is not running
	I0329 19:14:30.255017    8332 kic_runner.go:93] Run: sudo service kubelet stop
	I0329 19:14:30.255017    8332 kic_runner.go:114] Args: [docker exec --privileged cilium-20220329190230-1328 sudo service kubelet stop]
	I0329 19:14:30.907393    8332 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container c4107637dd3b7afc7bc72f7ccceb100bd4d5d976f19a845635c733b299834bef is not running
	
	** /stderr **
	W0329 19:14:30.907393    8332 kic.go:445] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container c4107637dd3b7afc7bc72f7ccceb100bd4d5d976f19a845635c733b299834bef is not running
	I0329 19:14:30.924393    8332 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0329 19:14:30.924393    8332 kic_runner.go:114] Args: [docker exec --privileged cilium-20220329190230-1328 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0329 19:14:31.573499    8332 kic.go:456] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container c4107637dd3b7afc7bc72f7ccceb100bd4d5d976f19a845635c733b299834bef is not running
	I0329 19:14:31.573499    8332 kic.go:466] successfully stopped kubernetes!
	I0329 19:14:31.592504    8332 kic_runner.go:93] Run: pgrep kube-apiserver
	I0329 19:14:31.592504    8332 kic_runner.go:114] Args: [docker exec --privileged cilium-20220329190230-1328 pgrep kube-apiserver]
	I0329 19:14:32.806350    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:14:36.321474    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:14:39.821599    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:14:43.327423    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:14:46.812356    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:14:50.279659    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:14:53.784031    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:14:57.323769    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:00.865625    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:04.364257    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:07.843377    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:11.374112    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:14.833984    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:18.346018    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:21.839831    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:25.303663    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:28.762213    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:32.264160    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:35.729405    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:39.512291    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:42.979358    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:46.477279    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:49.989991    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:53.479504    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:56.948193    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:00.417681    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:03.909234    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:07.395226    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:10.901920    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:14.374872    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:17.851221    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:21.328880    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:24.809421    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:28.320009    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:31.816793    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:35.333832    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:38.824194    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:42.307731    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:45.807089    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:49.305718    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:52.798922    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:56.313890    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:59.836322    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:03.403038    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:06.904873    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:10.479959    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:14.082879    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:17.624185    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:21.136015    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:24.685013    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:28.213734    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:31.714399    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:35.233330    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:38.843325    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:42.402551    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:45.938694    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:49.454870    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:52.986902    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:56.489353    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:00.021135    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:03.507172    8332 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0329 19:18:03.507172    8332 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0329 19:18:03.533496    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	W0329 19:18:04.069962    8332 delete.go:135] deletehost failed: Docker machine "cilium-20220329190230-1328" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0329 19:18:04.077704    8332 cli_runner.go:133] Run: docker container inspect -f {{.Id}} cilium-20220329190230-1328
	I0329 19:18:04.600810    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:05.090393    8332 cli_runner.go:133] Run: docker exec --privileged -t cilium-20220329190230-1328 /bin/bash -c "sudo init 0"
	W0329 19:18:05.672679    8332 cli_runner.go:180] docker exec --privileged -t cilium-20220329190230-1328 /bin/bash -c "sudo init 0" returned with exit code 1
	I0329 19:18:05.672781    8332 oci.go:656] error shutdown cilium-20220329190230-1328: docker exec --privileged -t cilium-20220329190230-1328 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container c4107637dd3b7afc7bc72f7ccceb100bd4d5d976f19a845635c733b299834bef is not running
	I0329 19:18:06.681210    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:07.181772    8332 oci.go:670] temporary error: container cilium-20220329190230-1328 status is  but expect it to be exited
	I0329 19:18:07.181772    8332 oci.go:676] Successfully shutdown container cilium-20220329190230-1328
	I0329 19:18:07.188781    8332 cli_runner.go:133] Run: docker rm -f -v cilium-20220329190230-1328
	I0329 19:18:16.145751    8332 cli_runner.go:186] Completed: docker rm -f -v cilium-20220329190230-1328: (8.9569192s)
	I0329 19:18:16.153565    8332 cli_runner.go:133] Run: docker container inspect -f {{.Id}} cilium-20220329190230-1328
	W0329 19:18:16.627733    8332 cli_runner.go:180] docker container inspect -f {{.Id}} cilium-20220329190230-1328 returned with exit code 1
	I0329 19:18:16.634737    8332 cli_runner.go:133] Run: docker network inspect cilium-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0329 19:18:17.107264    8332 cli_runner.go:180] docker network inspect cilium-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0329 19:18:17.119693    8332 network_create.go:262] running [docker network inspect cilium-20220329190230-1328] to gather additional debugging logs...
	I0329 19:18:17.119693    8332 cli_runner.go:133] Run: docker network inspect cilium-20220329190230-1328
	W0329 19:18:17.583398    8332 cli_runner.go:180] docker network inspect cilium-20220329190230-1328 returned with exit code 1
	I0329 19:18:17.583398    8332 network_create.go:265] error running [docker network inspect cilium-20220329190230-1328]: docker network inspect cilium-20220329190230-1328: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220329190230-1328
	I0329 19:18:17.583398    8332 network_create.go:267] output of [docker network inspect cilium-20220329190230-1328]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220329190230-1328
	
	** /stderr **
	W0329 19:18:17.584615    8332 delete.go:139] delete failed (probably ok) <nil>
	I0329 19:18:17.584615    8332 fix.go:120] Sleeping 1 second for extra luck!
	I0329 19:18:18.590842    8332 start.go:127] createHost starting for "" (driver="docker")
	I0329 19:18:18.596546    8332 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0329 19:18:18.596546    8332 start.go:161] libmachine.API.Create for "cilium-20220329190230-1328" (driver="docker")
	I0329 19:18:18.596546    8332 client.go:168] LocalClient.Create starting
	I0329 19:18:18.597645    8332 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I0329 19:18:18.597849    8332 main.go:130] libmachine: Decoding PEM data...
	I0329 19:18:18.597932    8332 main.go:130] libmachine: Parsing certificate...
	I0329 19:18:18.598120    8332 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I0329 19:18:18.598331    8332 main.go:130] libmachine: Decoding PEM data...
	I0329 19:18:18.598424    8332 main.go:130] libmachine: Parsing certificate...
	I0329 19:18:18.609504    8332 cli_runner.go:133] Run: docker network inspect cilium-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0329 19:18:19.076239    8332 cli_runner.go:180] docker network inspect cilium-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0329 19:18:19.084550    8332 network_create.go:262] running [docker network inspect cilium-20220329190230-1328] to gather additional debugging logs...
	I0329 19:18:19.084550    8332 cli_runner.go:133] Run: docker network inspect cilium-20220329190230-1328
	W0329 19:18:19.541824    8332 cli_runner.go:180] docker network inspect cilium-20220329190230-1328 returned with exit code 1
	I0329 19:18:19.542006    8332 network_create.go:265] error running [docker network inspect cilium-20220329190230-1328]: docker network inspect cilium-20220329190230-1328: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220329190230-1328
	I0329 19:18:19.542006    8332 network_create.go:267] output of [docker network inspect cilium-20220329190230-1328]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220329190230-1328
	
	** /stderr **
	I0329 19:18:19.549704    8332 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 19:18:20.011150    8332 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000614168] amended:false}} dirty:map[] misses:0}
	I0329 19:18:20.011150    8332 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 19:18:20.011150    8332 network_create.go:114] attempt to create docker network cilium-20220329190230-1328 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0329 19:18:20.019847    8332 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220329190230-1328
	W0329 19:18:20.471587    8332 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220329190230-1328 returned with exit code 1
	W0329 19:18:20.471665    8332 network_create.go:106] failed to create docker network cilium-20220329190230-1328 192.168.49.0/24, will retry: subnet is taken
	I0329 19:18:20.492607    8332 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000614168] amended:false}} dirty:map[] misses:0}
	I0329 19:18:20.492607    8332 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 19:18:20.508171    8332 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000614168] amended:true}} dirty:map[192.168.49.0:0xc000614168 192.168.58.0:0xc000bc2398] misses:0}
	I0329 19:18:20.508171    8332 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 19:18:20.508171    8332 network_create.go:114] attempt to create docker network cilium-20220329190230-1328 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0329 19:18:20.516904    8332 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220329190230-1328
	I0329 19:18:21.124271    8332 network_create.go:98] docker network cilium-20220329190230-1328 192.168.58.0/24 created
	I0329 19:18:21.124915    8332 kic.go:106] calculated static IP "192.168.58.2" for the "cilium-20220329190230-1328" container
	I0329 19:18:21.140451    8332 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0329 19:18:21.596633    8332 cli_runner.go:133] Run: docker volume create cilium-20220329190230-1328 --label name.minikube.sigs.k8s.io=cilium-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true
	I0329 19:18:22.028222    8332 oci.go:102] Successfully created a docker volume cilium-20220329190230-1328
	I0329 19:18:22.036356    8332 cli_runner.go:133] Run: docker run --rm --name cilium-20220329190230-1328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220329190230-1328 --entrypoint /usr/bin/test -v cilium-20220329190230-1328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0329 19:18:25.709447    8332 cli_runner.go:186] Completed: docker run --rm --name cilium-20220329190230-1328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220329190230-1328 --entrypoint /usr/bin/test -v cilium-20220329190230-1328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib: (3.6730707s)
	I0329 19:18:25.709447    8332 oci.go:106] Successfully prepared a docker volume cilium-20220329190230-1328
	I0329 19:18:25.709447    8332 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 19:18:25.709447    8332 kic.go:179] Starting extracting preloaded images to volume ...
	I0329 19:18:25.716493    8332 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220329190230-1328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0329 19:19:14.723792    8332 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220329190230-1328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (49.0070193s)
	I0329 19:19:14.723792    8332 kic.go:188] duration metric: took 49.014065 seconds to extract preloaded images to volume
	I0329 19:19:14.731786    8332 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 19:19:15.431510    8332 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:47 OomKillDisable:true NGoroutines:46 SystemTime:2022-03-29 19:19:15.0545074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 19:19:15.440710    8332 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0329 19:19:16.136168    8332 cli_runner.go:133] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220329190230-1328 --name cilium-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220329190230-1328 --network cilium-20220329190230-1328 --ip 192.168.58.2 --volume cilium-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0329 19:19:18.620607    8332 cli_runner.go:186] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220329190230-1328 --name cilium-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220329190230-1328 --network cilium-20220329190230-1328 --ip 192.168.58.2 --volume cilium-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: (2.4838713s)
	I0329 19:19:18.631599    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Running}}
	I0329 19:19:19.157414    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:19:19.685402    8332 cli_runner.go:133] Run: docker exec cilium-20220329190230-1328 stat /var/lib/dpkg/alternatives/iptables
	I0329 19:19:20.628970    8332 oci.go:278] the created container "cilium-20220329190230-1328" has a running status.
	I0329 19:19:20.628970    8332 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220329190230-1328\id_rsa...
	I0329 19:19:20.812276    8332 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220329190230-1328\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0329 19:19:21.444349    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:19:21.914512    8332 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0329 19:19:21.914567    8332 kic_runner.go:114] Args: [docker exec --privileged cilium-20220329190230-1328 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0329 19:19:22.808600    8332 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220329190230-1328\id_rsa...
	I0329 19:19:23.342913    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:19:23.830758    8332 machine.go:88] provisioning docker machine ...
	I0329 19:19:23.830866    8332 ubuntu.go:169] provisioning hostname "cilium-20220329190230-1328"
	I0329 19:19:23.845256    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:24.365320    8332 main.go:130] libmachine: Using SSH client type: native
	I0329 19:19:24.371889    8332 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57409 <nil> <nil>}
	I0329 19:19:24.371889    8332 main.go:130] libmachine: About to run SSH command:
	sudo hostname cilium-20220329190230-1328 && echo "cilium-20220329190230-1328" | sudo tee /etc/hostname
	I0329 19:19:24.614965    8332 main.go:130] libmachine: SSH cmd err, output: <nil>: cilium-20220329190230-1328
	
	I0329 19:19:24.625245    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:25.139008    8332 main.go:130] libmachine: Using SSH client type: native
	I0329 19:19:25.139525    8332 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57409 <nil> <nil>}
	I0329 19:19:25.139619    8332 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20220329190230-1328' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20220329190230-1328/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20220329190230-1328' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 19:19:25.343527    8332 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 19:19:25.343527    8332 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I0329 19:19:25.343527    8332 ubuntu.go:177] setting up certificates
	I0329 19:19:25.343527    8332 provision.go:83] configureAuth start
	I0329 19:19:25.353673    8332 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220329190230-1328
	I0329 19:19:25.871901    8332 provision.go:138] copyHostCerts
	I0329 19:19:25.871901    8332 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I0329 19:19:25.871901    8332 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I0329 19:19:25.871901    8332 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0329 19:19:25.873140    8332 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I0329 19:19:25.873140    8332 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I0329 19:19:25.873904    8332 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0329 19:19:25.874942    8332 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I0329 19:19:25.874942    8332 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I0329 19:19:25.874942    8332 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1679 bytes)
	I0329 19:19:25.876182    8332 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cilium-20220329190230-1328 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-20220329190230-1328]
	I0329 19:19:26.026809    8332 provision.go:172] copyRemoteCerts
	I0329 19:19:26.037349    8332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 19:19:26.047077    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:26.543365    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57409 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220329190230-1328\id_rsa Username:docker}
	I0329 19:19:26.687346    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 19:19:26.752287    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0329 19:19:26.808001    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0329 19:19:26.868657    8332 provision.go:86] duration metric: configureAuth took 1.5251217s
	I0329 19:19:26.868657    8332 ubuntu.go:193] setting minikube options for container-runtime
	I0329 19:19:26.869912    8332 config.go:176] Loaded profile config "cilium-20220329190230-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:19:26.881179    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:27.434652    8332 main.go:130] libmachine: Using SSH client type: native
	I0329 19:19:27.434652    8332 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57409 <nil> <nil>}
	I0329 19:19:27.434652    8332 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 19:19:27.654714    8332 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 19:19:27.655706    8332 ubuntu.go:71] root file system type: overlay
	I0329 19:19:27.655706    8332 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 19:19:27.663716    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:28.153787    8332 main.go:130] libmachine: Using SSH client type: native
	I0329 19:19:28.153787    8332 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57409 <nil> <nil>}
	I0329 19:19:28.154569    8332 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 19:19:28.391952    8332 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0329 19:19:28.403494    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:28.907052    8332 main.go:130] libmachine: Using SSH client type: native
	I0329 19:19:28.907459    8332 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57409 <nil> <nil>}
	I0329 19:19:28.907459    8332 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0329 19:19:34.193728    8332 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-03-29 19:19:28.342870100 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0329 19:19:34.193728    8332 machine.go:91] provisioned docker machine in 10.3628582s
	I0329 19:19:34.193728    8332 client.go:171] LocalClient.Create took 1m15.5961307s
	I0329 19:19:34.193728    8332 start.go:169] duration metric: libmachine.API.Create for "cilium-20220329190230-1328" took 1m15.5967512s
	I0329 19:19:34.193728    8332 start.go:302] post-start starting for "cilium-20220329190230-1328" (driver="docker")
	I0329 19:19:34.193728    8332 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 19:19:34.215864    8332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 19:19:34.227648    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:34.730662    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57409 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220329190230-1328\id_rsa Username:docker}
	I0329 19:19:34.894630    8332 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 19:19:34.910461    8332 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 19:19:34.910544    8332 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 19:19:34.910544    8332 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 19:19:34.910544    8332 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 19:19:34.910587    8332 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I0329 19:19:34.910899    8332 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I0329 19:19:34.911723    8332 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem -> 13282.pem in /etc/ssl/certs
	I0329 19:19:34.924241    8332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0329 19:19:34.951665    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem --> /etc/ssl/certs/13282.pem (1708 bytes)
	I0329 19:19:35.011396    8332 start.go:305] post-start completed in 817.6627ms
	I0329 19:19:35.030197    8332 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220329190230-1328
	I0329 19:19:35.511234    8332 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\config.json ...
	I0329 19:19:35.525978    8332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 19:19:35.534982    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:36.011105    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57409 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220329190230-1328\id_rsa Username:docker}
	I0329 19:19:36.184048    8332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 19:19:36.197845    8332 start.go:130] duration metric: createHost completed in 1m17.6065605s
	I0329 19:19:36.220051    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	W0329 19:19:36.740805    8332 fix.go:134] unexpected machine state, will restart: <nil>
	I0329 19:19:36.740934    8332 machine.go:88] provisioning docker machine ...
	I0329 19:19:36.740934    8332 ubuntu.go:169] provisioning hostname "cilium-20220329190230-1328"
	I0329 19:19:36.749899    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:37.230689    8332 main.go:130] libmachine: Using SSH client type: native
	I0329 19:19:37.231857    8332 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57409 <nil> <nil>}
	I0329 19:19:37.231910    8332 main.go:130] libmachine: About to run SSH command:
	sudo hostname cilium-20220329190230-1328 && echo "cilium-20220329190230-1328" | sudo tee /etc/hostname
	I0329 19:19:37.461908    8332 main.go:130] libmachine: SSH cmd err, output: <nil>: cilium-20220329190230-1328
	
	I0329 19:19:37.470917    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:37.970356    8332 main.go:130] libmachine: Using SSH client type: native
	I0329 19:19:37.971351    8332 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57409 <nil> <nil>}
	I0329 19:19:37.971351    8332 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20220329190230-1328' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20220329190230-1328/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20220329190230-1328' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 19:19:38.184901    8332 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 19:19:38.184901    8332 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I0329 19:19:38.184901    8332 ubuntu.go:177] setting up certificates
	I0329 19:19:38.184901    8332 provision.go:83] configureAuth start
	I0329 19:19:38.194406    8332 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220329190230-1328
	I0329 19:19:38.679857    8332 provision.go:138] copyHostCerts
	I0329 19:19:38.680147    8332 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I0329 19:19:38.680245    8332 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I0329 19:19:38.680834    8332 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1679 bytes)
	I0329 19:19:38.681233    8332 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I0329 19:19:38.681988    8332 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I0329 19:19:38.682405    8332 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0329 19:19:38.683928    8332 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I0329 19:19:38.683986    8332 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I0329 19:19:38.684445    8332 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0329 19:19:38.685549    8332 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cilium-20220329190230-1328 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-20220329190230-1328]
	I0329 19:19:38.868878    8332 provision.go:172] copyRemoteCerts
	I0329 19:19:38.879304    8332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 19:19:38.887305    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:39.357570    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57409 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220329190230-1328\id_rsa Username:docker}
	I0329 19:19:39.527199    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 19:19:39.598034    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0329 19:19:39.679743    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0329 19:19:39.748285    8332 provision.go:86] duration metric: configureAuth took 1.5633748s
	I0329 19:19:39.748285    8332 ubuntu.go:193] setting minikube options for container-runtime
	I0329 19:19:39.748285    8332 config.go:176] Loaded profile config "cilium-20220329190230-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:19:39.756277    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:40.230826    8332 main.go:130] libmachine: Using SSH client type: native
	I0329 19:19:40.231783    8332 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57409 <nil> <nil>}
	I0329 19:19:40.231783    8332 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 19:19:40.438547    8332 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 19:19:40.438547    8332 ubuntu.go:71] root file system type: overlay
	I0329 19:19:40.438547    8332 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 19:19:40.448188    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:40.949460    8332 main.go:130] libmachine: Using SSH client type: native
	I0329 19:19:40.949524    8332 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57409 <nil> <nil>}
	I0329 19:19:40.949524    8332 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 19:19:41.201611    8332 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0329 19:19:41.211128    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:41.731813    8332 main.go:130] libmachine: Using SSH client type: native
	I0329 19:19:41.732381    8332 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57409 <nil> <nil>}
	I0329 19:19:41.732431    8332 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0329 19:19:41.920488    8332 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 19:19:41.920488    8332 machine.go:91] provisioned docker machine in 5.1795251s
	I0329 19:19:41.920488    8332 start.go:302] post-start starting for "cilium-20220329190230-1328" (driver="docker")
	I0329 19:19:41.920488    8332 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 19:19:41.930493    8332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 19:19:41.937489    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:42.441022    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57409 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220329190230-1328\id_rsa Username:docker}
	I0329 19:19:42.604656    8332 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 19:19:42.619574    8332 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 19:19:42.619574    8332 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 19:19:42.619574    8332 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 19:19:42.619574    8332 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 19:19:42.619574    8332 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I0329 19:19:42.620472    8332 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I0329 19:19:42.620472    8332 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem -> 13282.pem in /etc/ssl/certs
	I0329 19:19:42.631465    8332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0329 19:19:42.655072    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem --> /etc/ssl/certs/13282.pem (1708 bytes)
	I0329 19:19:42.715779    8332 start.go:305] post-start completed in 795.2861ms
	I0329 19:19:42.734219    8332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 19:19:42.745729    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:43.243654    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57409 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220329190230-1328\id_rsa Username:docker}
	I0329 19:19:43.403948    8332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 19:19:43.422014    8332 fix.go:57] fixHost completed within 5m15.9179961s
	I0329 19:19:43.422014    8332 start.go:81] releasing machines lock for "cilium-20220329190230-1328", held for 5m15.9182923s
	I0329 19:19:43.435710    8332 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220329190230-1328
	I0329 19:19:43.914492    8332 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0329 19:19:43.921495    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:43.922503    8332 ssh_runner.go:195] Run: sudo service containerd status
	I0329 19:19:43.929487    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:44.410858    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57409 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220329190230-1328\id_rsa Username:docker}
	I0329 19:19:44.432940    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57409 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220329190230-1328\id_rsa Username:docker}
	I0329 19:19:44.657384    8332 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 19:19:44.693232    8332 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0329 19:19:44.705229    8332 ssh_runner.go:195] Run: sudo service crio status
	I0329 19:19:44.756254    8332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0329 19:19:44.821443    8332 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 19:19:44.856962    8332 ssh_runner.go:195] Run: sudo service docker status
	I0329 19:19:44.916464    8332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 19:19:45.046926    8332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 19:19:45.189833    8332 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0329 19:19:45.198815    8332 cli_runner.go:133] Run: docker exec -t cilium-20220329190230-1328 dig +short host.docker.internal
	I0329 19:19:46.108107    8332 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0329 19:19:46.118589    8332 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0329 19:19:46.135863    8332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 19:19:46.174564    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:19:46.665450    8332 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 19:19:46.674281    8332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 19:19:46.772235    8332 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 19:19:46.772235    8332 docker.go:537] Images already preloaded, skipping extraction
	I0329 19:19:46.781327    8332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 19:19:46.870308    8332 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 19:19:46.870308    8332 cache_images.go:84] Images are preloaded, skipping loading
	I0329 19:19:46.879631    8332 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0329 19:19:47.078123    8332 cni.go:93] Creating CNI manager for "cilium"
	I0329 19:19:47.078123    8332 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0329 19:19:47.078123    8332 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-20220329190230-1328 NodeName:cilium-20220329190230-1328 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0329 19:19:47.078123    8332 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "cilium-20220329190230-1328"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0329 19:19:47.079338    8332 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=cilium-20220329190230-1328 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:cilium-20220329190230-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
	I0329 19:19:47.092655    8332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0329 19:19:47.117803    8332 binaries.go:44] Found k8s binaries, skipping transfer
	I0329 19:19:47.129545    8332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
	I0329 19:19:47.179229    8332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0329 19:19:47.218651    8332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0329 19:19:47.256827    8332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0329 19:19:47.311934    8332 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
	I0329 19:19:47.358584    8332 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes)
	I0329 19:19:47.416492    8332 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0329 19:19:47.432749    8332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 19:19:47.469404    8332 certs.go:54] Setting up C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328 for IP: 192.168.58.2
	I0329 19:19:47.470247    8332 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key
	I0329 19:19:47.470563    8332 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key
	I0329 19:19:47.471126    8332 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\client.key
	I0329 19:19:47.471324    8332 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\client.crt with IP's: []
	I0329 19:19:47.631378    8332 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\client.crt ...
	I0329 19:19:47.631910    8332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\client.crt: {Name:mk0121c5054c22ee4ecac5c346da90ec9272c3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:19:47.633197    8332 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\client.key ...
	I0329 19:19:47.633197    8332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\client.key: {Name:mk765f600c6b023aaf11e219135fd4e84de92523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:19:47.633525    8332 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\apiserver.key.cee25041
	I0329 19:19:47.634659    8332 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0329 19:19:47.988896    8332 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\apiserver.crt.cee25041 ...
	I0329 19:19:47.988896    8332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\apiserver.crt.cee25041: {Name:mk788b149cce69a004a1385b9d183c6c15b8ee4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:19:47.989893    8332 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\apiserver.key.cee25041 ...
	I0329 19:19:47.989893    8332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\apiserver.key.cee25041: {Name:mk4517a8db3884964bec0b0088f5b3298611bcf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:19:47.990897    8332 certs.go:320] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\apiserver.crt.cee25041 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\apiserver.crt
	I0329 19:19:47.995895    8332 certs.go:324] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\apiserver.key.cee25041 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\apiserver.key
	I0329 19:19:47.996896    8332 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\proxy-client.key
	I0329 19:19:47.996896    8332 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\proxy-client.crt with IP's: []
	I0329 19:19:48.161611    8332 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\proxy-client.crt ...
	I0329 19:19:48.161611    8332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\proxy-client.crt: {Name:mkbb9141e2da849b08a0859b7578ce368ae41e97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:19:48.162602    8332 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\proxy-client.key ...
	I0329 19:19:48.162602    8332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\proxy-client.key: {Name:mk3551a9c4fc8db3f99dbd8ad4754b2f685dae0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:19:48.171689    8332 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\1328.pem (1338 bytes)
	W0329 19:19:48.172007    8332 certs.go:384] ignoring C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\1328_empty.pem, impossibly tiny 0 bytes
	I0329 19:19:48.172007    8332 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0329 19:19:48.172007    8332 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0329 19:19:48.172007    8332 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0329 19:19:48.172693    8332 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0329 19:19:48.172693    8332 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem (1708 bytes)
	I0329 19:19:48.174378    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0329 19:19:48.250098    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0329 19:19:48.316727    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0329 19:19:48.374963    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\cilium-20220329190230-1328\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0329 19:19:48.440191    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0329 19:19:48.509191    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0329 19:19:48.591166    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0329 19:19:48.650365    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0329 19:19:48.696373    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\1328.pem --> /usr/share/ca-certificates/1328.pem (1338 bytes)
	I0329 19:19:48.755818    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem --> /usr/share/ca-certificates/13282.pem (1708 bytes)
	I0329 19:19:48.810590    8332 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0329 19:19:48.855157    8332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0329 19:19:48.899610    8332 ssh_runner.go:195] Run: openssl version
	I0329 19:19:48.921603    8332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1328.pem && ln -fs /usr/share/ca-certificates/1328.pem /etc/ssl/certs/1328.pem"
	I0329 19:19:48.954211    8332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1328.pem
	I0329 19:19:48.968001    8332 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 29 17:29 /usr/share/ca-certificates/1328.pem
	I0329 19:19:48.977291    8332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1328.pem
	I0329 19:19:49.002311    8332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1328.pem /etc/ssl/certs/51391683.0"
	I0329 19:19:49.035776    8332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13282.pem && ln -fs /usr/share/ca-certificates/13282.pem /etc/ssl/certs/13282.pem"
	I0329 19:19:49.075770    8332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13282.pem
	I0329 19:19:49.086573    8332 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 29 17:29 /usr/share/ca-certificates/13282.pem
	I0329 19:19:49.095573    8332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13282.pem
	I0329 19:19:49.121573    8332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13282.pem /etc/ssl/certs/3ec20f2e.0"
	I0329 19:19:49.152649    8332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0329 19:19:49.180652    8332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0329 19:19:49.190646    8332 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 29 17:18 /usr/share/ca-certificates/minikubeCA.pem
	I0329 19:19:49.200652    8332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0329 19:19:49.222641    8332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0329 19:19:49.244509    8332 kubeadm.go:391] StartCluster: {Name:cilium-20220329190230-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:cilium-20220329190230-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 19:19:49.251645    8332 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0329 19:19:49.344543    8332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0329 19:19:49.393950    8332 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0329 19:19:49.420738    8332 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0329 19:19:49.432016    8332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0329 19:19:49.464664    8332 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0329 19:19:49.464664    8332 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0329 19:20:21.086354    8332 out.go:203]   - Generating certificates and keys ...
	I0329 19:20:21.100608    8332 out.go:203]   - Booting up control plane ...
	I0329 19:20:21.110412    8332 out.go:203]   - Configuring RBAC rules ...
	I0329 19:20:21.116074    8332 cni.go:93] Creating CNI manager for "cilium"
	I0329 19:20:21.124086    8332 out.go:176] * Configuring Cilium (Container Networking Interface) ...
	I0329 19:20:21.133734    8332 ssh_runner.go:195] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
	I0329 19:20:21.376507    8332 cilium.go:816] Using pod CIDR: 10.244.0.0/16
	I0329 19:20:21.376507    8332 cilium.go:827] cilium options: {PodSubnet:10.244.0.0/16}
	I0329 19:20:21.377507    8332 cilium.go:831] cilium config:
	---
	# Source: cilium/templates/cilium-agent-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-configmap.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: cilium-config
	  namespace: kube-system
	data:
	
	  # Identity allocation mode selects how identities are shared between cilium
	  # nodes by setting how they are stored. The options are "crd" or "kvstore".
	  # - "crd" stores identities in kubernetes as CRDs (custom resource definition).
	  #   These can be queried with:
	  #     kubectl get ciliumid
	  # - "kvstore" stores identities in a kvstore, etcd or consul, that is
	  #   configured below. Cilium versions before 1.6 supported only the kvstore
	  #   backend. Upgrades from these older cilium versions should continue using
	  #   the kvstore by commenting out the identity-allocation-mode below, or
	  #   setting it to "kvstore".
	  identity-allocation-mode: crd
	  cilium-endpoint-gc-interval: "5m0s"
	
	  # If you want to run cilium in debug mode change this value to true
	  debug: "false"
	  # The agent can be put into the following three policy enforcement modes
	  # default, always and never.
	  # https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
	  enable-policy: "default"
	
	  # Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
	  # address.
	  enable-ipv4: "true"
	
	  # Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
	  # address.
	  enable-ipv6: "false"
	  # Users who wish to specify their own custom CNI configuration file must set
	  # custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
	  custom-cni-conf: "false"
	  enable-bpf-clock-probe: "true"
	  # If you want cilium monitor to aggregate tracing for packets, set this level
	  # to "low", "medium", or "maximum". The higher the level, the fewer packets
	  # will be seen in monitor output.
	  monitor-aggregation: medium
	
	  # The monitor aggregation interval governs the typical time between monitor
	  # notification events for each allowed connection.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-interval: 5s
	
	  # The monitor aggregation flags determine which TCP flags, upon the
	  # first observation, cause monitor notifications to be generated.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-flags: all
	  # Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
	  # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
	  bpf-map-dynamic-size-ratio: "0.0025"
	  # bpf-policy-map-max specifies the maximum number of entries in endpoint
	  # policy map (per endpoint)
	  bpf-policy-map-max: "16384"
	  # bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
	  # backend and affinity maps.
	  bpf-lb-map-max: "65536"
	  # Pre-allocation of map entries allows per-packet latency to be reduced, at
	  # the expense of up-front memory allocation for the entries in the maps. The
	  # default value below will minimize memory usage in the default installation;
	  # users who are sensitive to latency may consider setting this to "true".
	  #
	  # This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
	  # this option and behave as though it is set to "true".
	  #
	  # If this value is modified, then during the next Cilium startup the restore
	  # of existing endpoints and tracking of ongoing connections may be disrupted.
	  # As a result, reply packets may be dropped and the load-balancing decisions
	  # for established connections may change.
	  #
	  # If this option is set to "false" during an upgrade from 1.3 or earlier to
	  # 1.4 or later, then it may cause one-time disruptions during the upgrade.
	  preallocate-bpf-maps: "false"
	
	  # Regular expression matching compatible Istio sidecar istio-proxy
	  # container image names
	  sidecar-istio-proxy-image: "cilium/istio_proxy"
	
	  # Name of the cluster. Only relevant when building a mesh of clusters.
	  cluster-name: default
	  # Unique ID of the cluster. Must be unique across all connected clusters and
	  # in the range of 1 to 255. Only relevant when building a mesh of clusters.
	  cluster-id: ""
	
	  # Encapsulation mode for communication between nodes
	  # Possible values:
	  #   - disabled
	  #   - vxlan (default)
	  #   - geneve
	  tunnel: vxlan
	  # Enables L7 proxy for L7 policy enforcement and visibility
	  enable-l7-proxy: "true"
	
	  # wait-bpf-mount makes init container wait until bpf filesystem is mounted
	  wait-bpf-mount: "false"
	
	  masquerade: "true"
	  enable-bpf-masquerade: "true"
	
	  enable-xt-socket-fallback: "true"
	  install-iptables-rules: "true"
	
	  auto-direct-node-routes: "false"
	  enable-bandwidth-manager: "false"
	  enable-local-redirect-policy: "false"
	  kube-proxy-replacement:  "probe"
	  kube-proxy-replacement-healthz-bind-address: ""
	  enable-health-check-nodeport: "true"
	  node-port-bind-protection: "true"
	  enable-auto-protect-node-port-range: "true"
	  enable-session-affinity: "true"
	  k8s-require-ipv4-pod-cidr: "true"
	  k8s-require-ipv6-pod-cidr: "false"
	  enable-endpoint-health-checking: "true"
	  enable-health-checking: "true"
	  enable-well-known-identities: "false"
	  enable-remote-node-identity: "true"
	  operator-api-serve-addr: "127.0.0.1:9234"
	  # Enable Hubble gRPC service.
	  enable-hubble: "true"
	  # UNIX domain socket for Hubble server to listen to.
	  hubble-socket-path:  "/var/run/cilium/hubble.sock"
	  # An additional address for Hubble server to listen to (e.g. ":4244").
	  hubble-listen-address: ":4244"
	  hubble-disable-tls: "false"
	  hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
	  hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
	  hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
	  ipam: "cluster-pool"
	  cluster-pool-ipv4-cidr: "10.244.0.0/16"
	  cluster-pool-ipv4-mask-size: "24"
	  disable-cnp-status-updates: "true"
	  cgroup-root: "/run/cilium/cgroupv2"
	---
	# Source: cilium/templates/cilium-agent-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium
	rules:
	- apiGroups:
	  - networking.k8s.io
	  resources:
	  - networkpolicies
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - namespaces
	  - services
	  - nodes
	  - endpoints
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - pods
	  - pods/finalizers
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	  - delete
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  - nodes/status
	  verbs:
	  - patch
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  # Deprecated for removal in v1.10
	  - create
	  - list
	  - watch
	  - update
	
	  # This is used when validating policies in preflight. This will need to stay
	  # until we figure out how to avoid "get" inside the preflight; ideally it
	  # should then be removed.
	  - get
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	---
	# Source: cilium/templates/cilium-operator-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium-operator
	rules:
	- apiGroups:
	  - ""
	  resources:
	  # to automatically delete [core|kube]dns pods so that they start being
	  # managed by Cilium
	  - pods
	  verbs:
	  - get
	  - list
	  - watch
	  - delete
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  # to perform the translation of a CNP that contains 'ToGroup' to its endpoints
	  - services
	  - endpoints
	  # to check apiserver connectivity
	  - namespaces
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/status
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  - create
	  - get
	  - list
	  - update
	  - watch
	# For cilium-operator running in HA mode.
	#
	# Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
	# between multiple running instances.
	# The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
	# common and fewer objects in the cluster watch "all Leases".
	# The support for leases was introduced in coordination.k8s.io/v1 during Kubernetes 1.14 release.
	# In Cilium we currently don't support HA mode for K8s versions < 1.14. This condition makes sure
	# that we only authorize access to leases resources in supported K8s versions.
	- apiGroups:
	  - coordination.k8s.io
	  resources:
	  - leases
	  verbs:
	  - create
	  - get
	  - update
	---
	# Source: cilium/templates/cilium-agent-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium
	subjects:
	- kind: ServiceAccount
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium-operator
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium-operator
	subjects:
	- kind: ServiceAccount
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-agent-daemonset.yaml
	apiVersion: apps/v1
	kind: DaemonSet
	metadata:
	  labels:
	    k8s-app: cilium
	  name: cilium
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: cilium
	  updateStrategy:
	    rollingUpdate:
	      maxUnavailable: 2
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	        # This annotation plus the CriticalAddonsOnly toleration marks cilium
	        # as a critical pod in the cluster, which ensures cilium gets
	        # priority scheduling.
	        # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
	        scheduler.alpha.kubernetes.io/critical-pod: ""
	      labels:
	        k8s-app: cilium
	    spec:
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: k8s-app
	                operator: In
	                values:
	                - cilium
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        command:
	        - cilium-agent
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 10
	          # The initial delay for the liveness probe is intentionally large to
	          # avoid an endless kill & restart cycle in the event that the initial
	          # bootstrapping takes longer than expected.
	          initialDelaySeconds: 120
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        readinessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 3
	          initialDelaySeconds: 5
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_FLANNEL_MASTER_DEVICE
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-master-device
	              name: cilium-config
	              optional: true
	        - name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-uninstall-on-exit
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CLUSTERMESH_CONFIG
	          value: /var/lib/cilium/clustermesh/
	        - name: CILIUM_CNI_CHAINING_MODE
	          valueFrom:
	            configMapKeyRef:
	              key: cni-chaining-mode
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CUSTOM_CNI_CONF
	          valueFrom:
	            configMapKeyRef:
	              key: custom-cni-conf
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        lifecycle:
	          postStart:
	            exec:
	              command:
	              - "/cni-install.sh"
	              - "--enable-debug=false"
	          preStop:
	            exec:
	              command:
	              - /cni-uninstall.sh
	        name: cilium-agent
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	            - SYS_MODULE
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        - mountPath: /host/opt/cni/bin
	          name: cni-path
	        - mountPath: /host/etc/cni/net.d
	          name: etc-cni-netd
	        - mountPath: /var/lib/cilium/clustermesh
	          name: clustermesh-secrets
	          readOnly: true
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	          # Needed to be able to load kernel modules
	        - mountPath: /lib/modules
	          name: lib-modules
	          readOnly: true
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	        - mountPath: /var/lib/cilium/tls/hubble
	          name: hubble-tls
	          readOnly: true
	      hostNetwork: true
	      initContainers:
	      # Required to mount the cgroup2 filesystem on the underlying Kubernetes node.
	      # We use the nsenter command with the host's cgroup and mount namespaces.
	      - name: mount-cgroup
	        env:
	          - name: CGROUP_ROOT
	            value: /run/cilium/cgroupv2
	          - name: BIN_PATH
	            value: /opt/cni/bin
	        command:
	          - sh
	          - -c
	          # The statically linked Go binary is invoked to avoid any dependency
	          # on utilities like sh and mount that can be missing on certain
	          # distros installed on the underlying host. Copy the binary to the
	          # same directory where we install the cilium cni plugin so that exec
	          # permissions are available.
	          - 'cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; rm /hostbin/cilium-mount'
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        volumeMounts:
	          - mountPath: /hostproc
	            name: hostproc
	          - mountPath: /hostbin
	            name: cni-path
	        securityContext:
	          privileged: true
	      - command:
	        - /init-container.sh
	        env:
	        - name: CILIUM_ALL_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_BPF_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-bpf-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_WAIT_BPF_MOUNT
	          valueFrom:
	            configMapKeyRef:
	              key: wait-bpf-mount
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        name: clean-cilium-state
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	          mountPropagation: HostToContainer
	          # Required to mount cgroup filesystem from the host to cilium agent pod
	        - mountPath: /run/cilium/cgroupv2
	          name: cilium-cgroup
	          mountPropagation: HostToContainer
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        resources:
	          requests:
	            cpu: 100m
	            memory: 100Mi
	      restartPolicy: Always
	      priorityClassName: system-node-critical
	      serviceAccount: cilium
	      serviceAccountName: cilium
	      terminationGracePeriodSeconds: 1
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To keep state between restarts / upgrades
	      - hostPath:
	          path: /var/run/cilium
	          type: DirectoryOrCreate
	        name: cilium-run
	        # To keep state between restarts / upgrades for bpf maps
	      - hostPath:
	          path: /sys/fs/bpf
	          type: DirectoryOrCreate
	        name: bpf-maps
	      # To mount cgroup2 filesystem on the host
	      - hostPath:
	          path: /proc
	          type: Directory
	        name: hostproc
	      # To keep state between restarts / upgrades for cgroup2 filesystem
	      - hostPath:
	          path: /run/cilium/cgroupv2
	          type: DirectoryOrCreate
	        name: cilium-cgroup
	      # To install cilium cni plugin in the host
	      - hostPath:
	          path: /opt/cni/bin
	          type: DirectoryOrCreate
	        name: cni-path
	        # To install cilium cni configuration in the host
	      - hostPath:
	          path: /etc/cni/net.d
	          type: DirectoryOrCreate
	        name: etc-cni-netd
	        # To be able to load kernel modules
	      - hostPath:
	          path: /lib/modules
	        name: lib-modules
	        # To access iptables concurrently with other processes (e.g. kube-proxy)
	      - hostPath:
	          path: /run/xtables.lock
	          type: FileOrCreate
	        name: xtables-lock
	        # To read the clustermesh configuration
	      - name: clustermesh-secrets
	        secret:
	          defaultMode: 420
	          optional: true
	          secretName: cilium-clustermesh
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	      - name: hubble-tls
	        projected:
	          sources:
	          - secret:
	              name: hubble-server-certs
	              items:
	                - key: tls.crt
	                  path: server.crt
	                - key: tls.key
	                  path: server.key
	              optional: true
	          - configMap:
	              name: hubble-ca-cert
	              items:
	                - key: ca.crt
	                  path: client-ca.crt
	              optional: true
	---
	# Source: cilium/templates/cilium-operator-deployment.yaml
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  labels:
	    io.cilium/app: operator
	    name: cilium-operator
	  name: cilium-operator
	  namespace: kube-system
	spec:
	  # We support HA mode only for Kubernetes version > 1.14
	  # See docs on ServerCapabilities.LeasesResourceLock in file pkg/k8s/version/version.go
	  # for more details.
	  replicas: 1
	  selector:
	    matchLabels:
	      io.cilium/app: operator
	      name: cilium-operator
	  strategy:
	    rollingUpdate:
	      maxSurge: 1
	      maxUnavailable: 1
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	      labels:
	        io.cilium/app: operator
	        name: cilium-operator
	    spec:
	      # In HA mode, cilium-operator pods must not be scheduled on the same
	      # node as they will clash with each other.
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: io.cilium/app
	                operator: In
	                values:
	                - operator
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        - --debug=$(CILIUM_DEBUG)
	        command:
	        - cilium-operator-generic
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_DEBUG
	          valueFrom:
	            configMapKeyRef:
	              key: debug
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/operator-generic:v1.9.9@sha256:3726a965cd960295ca3c5e7f2b543c02096c0912c6652eb8bbb9ce54bcaa99d8"
	        imagePullPolicy: IfNotPresent
	        name: cilium-operator
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9234
	            scheme: HTTP
	          initialDelaySeconds: 60
	          periodSeconds: 10
	          timeoutSeconds: 3
	        volumeMounts:
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	      hostNetwork: true
	      restartPolicy: Always
	      priorityClassName: system-cluster-critical
	      serviceAccount: cilium-operator
	      serviceAccountName: cilium-operator
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	
	I0329 19:20:21.377507    8332 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0329 19:20:21.377507    8332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (23204 bytes)
	I0329 19:20:21.602086    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0329 19:20:26.380787    8332 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (4.778674s)
	I0329 19:20:26.380787    8332 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0329 19:20:26.394786    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3 minikube.k8s.io/name=cilium-20220329190230-1328 minikube.k8s.io/updated_at=2022_03_29T19_20_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:26.397801    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:26.399785    8332 ops.go:34] apiserver oom_adj: -16
	I0329 19:20:26.725314    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:27.392175    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:27.885735    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:28.387733    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:28.889357    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:29.390832    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:29.904209    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:30.396046    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:30.889182    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:31.392712    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:31.893065    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:32.399978    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:32.888269    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:33.402293    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:34.593699    8332 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.1913993s)
	I0329 19:20:34.882776    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:36.243290    8332 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.3605069s)
	I0329 19:20:36.383584    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:39.570068    8332 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.1863556s)
	I0329 19:20:39.898035    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:43.192418    8332 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.2943642s)
	I0329 19:20:43.387804    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:43.898953    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:47.469231    8332 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.5702582s)
	I0329 19:20:47.902352    8332 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:20:48.584049    8332 kubeadm.go:1020] duration metric: took 22.2031354s to wait for elevateKubeSystemPrivileges.
	I0329 19:20:48.584049    8332 kubeadm.go:393] StartCluster complete in 59.3392021s
	I0329 19:20:48.584049    8332 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:20:48.584049    8332 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0329 19:20:48.588281    8332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:20:49.216228    8332 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20220329190230-1328" rescaled to 1
	I0329 19:20:49.216228    8332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0329 19:20:49.216228    8332 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 19:20:49.224218    8332 out.go:176] * Verifying Kubernetes components...
	I0329 19:20:49.216228    8332 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0329 19:20:49.216228    8332 config.go:176] Loaded profile config "cilium-20220329190230-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:20:49.224218    8332 addons.go:65] Setting storage-provisioner=true in profile "cilium-20220329190230-1328"
	I0329 19:20:49.224218    8332 addons.go:153] Setting addon storage-provisioner=true in "cilium-20220329190230-1328"
	W0329 19:20:49.224218    8332 addons.go:165] addon storage-provisioner should already be in state true
	I0329 19:20:49.224218    8332 addons.go:65] Setting default-storageclass=true in profile "cilium-20220329190230-1328"
	I0329 19:20:49.224218    8332 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20220329190230-1328"
	I0329 19:20:49.224218    8332 host.go:66] Checking if "cilium-20220329190230-1328" exists ...
	I0329 19:20:49.237210    8332 ssh_runner.go:195] Run: sudo service kubelet status
	I0329 19:20:49.243209    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:20:49.243209    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:20:49.855660    8332 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 19:20:49.855935    8332 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 19:20:49.855984    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0329 19:20:49.871434    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:20:49.891903    8332 addons.go:153] Setting addon default-storageclass=true in "cilium-20220329190230-1328"
	W0329 19:20:49.891903    8332 addons.go:165] addon default-storageclass should already be in state true
	I0329 19:20:49.892047    8332 host.go:66] Checking if "cilium-20220329190230-1328" exists ...
	I0329 19:20:49.914918    8332 cli_runner.go:133] Run: docker container inspect cilium-20220329190230-1328 --format={{.State.Status}}
	I0329 19:20:49.978132    8332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0329 19:20:50.000232    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:20:50.453364    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57409 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220329190230-1328\id_rsa Username:docker}
	I0329 19:20:50.533370    8332 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0329 19:20:50.533370    8332 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0329 19:20:50.543352    8332 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220329190230-1328
	I0329 19:20:50.594704    8332 node_ready.go:35] waiting up to 5m0s for node "cilium-20220329190230-1328" to be "Ready" ...
	I0329 19:20:50.674695    8332 node_ready.go:49] node "cilium-20220329190230-1328" has status "Ready":"True"
	I0329 19:20:50.674695    8332 node_ready.go:38] duration metric: took 79.9897ms waiting for node "cilium-20220329190230-1328" to be "Ready" ...
	I0329 19:20:50.674695    8332 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 19:20:50.701705    8332 pod_ready.go:78] waiting up to 5m0s for pod "cilium-h9rtv" in "kube-system" namespace to be "Ready" ...
	I0329 19:20:51.113113    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 19:20:51.134115    8332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57409 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\cilium-20220329190230-1328\id_rsa Username:docker}
	I0329 19:20:51.187854    8332 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.2097153s)
	I0329 19:20:51.187854    8332 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0329 19:20:51.819392    8332 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0329 19:20:53.001775    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:20:53.179999    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.3605997s)
	I0329 19:20:53.179999    8332 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.066874s)
	I0329 19:20:53.185017    8332 out.go:176] * Enabled addons: default-storageclass, storage-provisioner
	I0329 19:20:53.185017    8332 addons.go:417] enableAddons completed in 3.9687656s
	I0329 19:20:55.328478    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:20:57.471583    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:20:59.484757    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:01.489640    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:03.884956    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:05.965780    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:08.373159    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:10.378846    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:12.684890    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:14.887205    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:17.319902    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:19.376275    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:21.390734    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:23.884578    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:26.385832    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:28.815953    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:30.837483    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:33.382450    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:35.813152    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:37.930361    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:40.575040    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:42.836906    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:47.926827    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:53.775674    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:56.168143    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:58.308282    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:00.317919    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:02.819857    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:05.325327    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:07.824415    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:10.313321    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:12.316811    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:14.394315    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:17.116058    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:19.312826    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:21.748883    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:23.885890    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:26.324064    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:30.369642    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:32.818984    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:35.947036    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:38.318447    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:46.775026    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:48.840678    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:51.313482    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:53.320212    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:55.871773    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:58.374680    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:00.812551    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:02.824265    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:05.314739    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:07.329195    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:09.820195    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:11.866048    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:14.317171    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:16.328696    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:18.813392    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:21.317076    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:23.672398    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:25.819577    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:28.326977    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:30.823383    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:33.321155    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:35.815437    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:37.816578    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:40.331022    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:42.825237    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:45.312779    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:47.320766    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:49.813537    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:51.818289    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:53.828440    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:56.370171    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:58.821317    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:00.822044    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:03.330710    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:05.819020    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:07.824231    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:10.316167    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:12.826496    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:14.828679    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:17.321009    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:19.811682    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:21.816907    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:23.820111    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:26.319136    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:28.814680    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:30.816465    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:32.827223    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:35.327667    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:37.358572    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:39.812052    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:42.329882    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:44.822526    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:47.319094    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:49.322335    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:50.831728    8332 pod_ready.go:81] duration metric: took 4m0.1276387s waiting for pod "cilium-h9rtv" in "kube-system" namespace to be "Ready" ...
	E0329 19:24:50.831728    8332 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0329 19:24:50.831728    8332 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-78f49c47f-p5hs9" in "kube-system" namespace to be "Ready" ...
	I0329 19:24:50.843716    8332 pod_ready.go:92] pod "cilium-operator-78f49c47f-p5hs9" in "kube-system" namespace has status "Ready":"True"
	I0329 19:24:50.843716    8332 pod_ready.go:81] duration metric: took 11.9887ms waiting for pod "cilium-operator-78f49c47f-p5hs9" in "kube-system" namespace to be "Ready" ...
	I0329 19:24:50.843716    8332 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-kghld" in "kube-system" namespace to be "Ready" ...
	I0329 19:24:50.849714    8332 pod_ready.go:97] error getting pod "coredns-64897985d-kghld" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kghld" not found
	I0329 19:24:50.849714    8332 pod_ready.go:81] duration metric: took 5.9979ms waiting for pod "coredns-64897985d-kghld" in "kube-system" namespace to be "Ready" ...
	E0329 19:24:50.849714    8332 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-kghld" in "kube-system" namespace (skipping!): pods "coredns-64897985d-kghld" not found
	I0329 19:24:50.849714    8332 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-shk4t" in "kube-system" namespace to be "Ready" ...
	I0329 19:24:52.891456    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:54.906220    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:57.389630    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:59.394407    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:01.403532    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:03.895589    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:06.404765    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:08.888399    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:10.897500    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:13.388814    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:15.395186    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:17.396877    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:19.398615    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:21.888337    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:23.903676    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:26.417815    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:28.891833    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:30.894000    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:32.895550    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:35.401718    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:37.885470    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:39.889007    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:41.895203    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:44.386974    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:46.395910    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:48.892409    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:50.899341    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:53.396269    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:55.894351    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:57.899121    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:00.390699    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:02.396817    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:04.397426    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:06.897834    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:09.392250    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:11.894620    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:14.387627    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:16.401382    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:18.886655    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:20.899149    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:23.387137    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:25.387678    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:27.390403    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:29.410559    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:31.413827    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:33.893528    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:36.388299    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:38.390305    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:40.895399    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:43.393950    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:45.889062    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:47.889573    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:49.894019    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:52.391170    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:54.394405    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:56.890590    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:58.902909    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:01.388511    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:03.396915    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:05.888955    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:07.897710    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:10.401371    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:12.889683    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:14.899941    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:17.393874    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:19.901065    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:22.400204    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:24.894912    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:26.898572    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:29.395867    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:31.400568    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:33.885991    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:35.896873    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:37.898656    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:39.901336    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:42.391948    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:44.398401    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:46.398867    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:48.887265    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:50.899523    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:53.389626    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:55.890826    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:58.390965    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:00.393211    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:02.910203    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:05.397763    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:07.883638    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:10.394668    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:12.400708    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:14.888581    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:16.890344    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:19.120641    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:30.886454    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:32.889057    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:34.895083    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:36.900754    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:39.396248    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:41.397512    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:43.401627    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:45.885448    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:48.114392    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:50.205361    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:50.886534    8332 pod_ready.go:81] duration metric: took 4m0.0355002s waiting for pod "coredns-64897985d-shk4t" in "kube-system" namespace to be "Ready" ...
	E0329 19:28:50.886534    8332 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0329 19:28:50.886534    8332 pod_ready.go:38] duration metric: took 8m0.2091457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 19:28:50.900527    8332 out.go:176] 
	W0329 19:28:50.900527    8332 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0329 19:28:50.900527    8332 out.go:241] * 
	W0329 19:28:50.901839    8332 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0329 19:28:50.908858    8332 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (931.55s)
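
The failure above is dominated by long runs of identical `pod_ready.go` poll lines. When triaging reports like this one, a short script can tally how many not-Ready polls each pod accumulated before the wait timed out. This is only a sketch: the regex assumes the exact log format shown in this report (a `pod_ready.go:NNN]` prefix followed by the quoted pod and namespace), which may change across minikube versions.

```python
import re
from collections import Counter

# Matches minikube poll lines of the form seen above, e.g.
#   I0329 19:20:53.001775    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"
POLL_RE = re.compile(
    r'pod_ready\.go:\d+\] pod "([^"]+)" in "([^"]+)" namespace '
    r'has status "Ready":"False"'
)

def summarize_polls(log_lines):
    """Count not-Ready polls per pod name across a log dump."""
    counts = Counter()
    for line in log_lines:
        m = POLL_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return dict(counts)

# Three lines copied verbatim from the log above:
sample = [
    'I0329 19:20:53.001775    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"',
    'I0329 19:20:55.328478    8332 pod_ready.go:102] pod "cilium-h9rtv" in "kube-system" namespace has status "Ready":"False"',
    'I0329 19:24:52.891456    8332 pod_ready.go:102] pod "coredns-64897985d-shk4t" in "kube-system" namespace has status "Ready":"False"',
]
print(summarize_polls(sample))  # → {'cilium-h9rtv': 2, 'coredns-64897985d-shk4t': 1}
```

Pointing this at the full stderr capture makes it immediately visible that `cilium-h9rtv` and then `coredns-64897985d-shk4t` each burned a full 4m0s wait window without ever becoming Ready, which is what drove the 5m node wait past its deadline.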

                                                
                                    
TestNetworkPlugins/group/calico/Start (915.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20220329190230-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker
E0329 19:14:16.210241    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-20220329190230-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 80 (15m14.9982648s)

                                                
                                                
-- stdout --
	* [calico-20220329190230-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Starting control plane node calico-20220329190230-1328 in cluster calico-20220329190230-1328
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "calico-20220329190230-1328" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0329 19:14:16.135645    8480 out.go:297] Setting OutFile to fd 1896 ...
	I0329 19:14:16.195884    8480 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 19:14:16.195884    8480 out.go:310] Setting ErrFile to fd 1876...
	I0329 19:14:16.195884    8480 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 19:14:16.206694    8480 out.go:304] Setting JSON to false
	I0329 19:14:16.213021    8480 start.go:114] hostinfo: {"hostname":"minikube8","uptime":8452,"bootTime":1648572804,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0329 19:14:16.213219    8480 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 19:14:16.223508    8480 out.go:176] * [calico-20220329190230-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0329 19:14:16.224089    8480 notify.go:193] Checking for updates...
	I0329 19:14:16.234525    8480 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0329 19:14:16.239159    8480 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0329 19:14:16.242319    8480 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 19:14:16.247875    8480 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 19:14:16.249512    8480 config.go:176] Loaded profile config "cert-expiration-20220329190729-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:14:16.249776    8480 config.go:176] Loaded profile config "cilium-20220329190230-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:14:16.249776    8480 config.go:176] Loaded profile config "force-systemd-env-20220329190726-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:14:16.249776    8480 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 19:14:18.255327    8480 docker.go:137] docker version: linux-20.10.13
	I0329 19:14:18.263706    8480 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 19:14:18.946489    8480 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:57 OomKillDisable:true NGoroutines:50 SystemTime:2022-03-29 19:14:18.5842441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 19:14:18.955029    8480 out.go:176] * Using the docker driver based on user configuration
	I0329 19:14:18.955622    8480 start.go:283] selected driver: docker
	I0329 19:14:18.955622    8480 start.go:800] validating driver "docker" against <nil>
	I0329 19:14:18.955622    8480 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0329 19:14:19.020585    8480 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 19:14:19.720355    8480 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:57 OomKillDisable:true NGoroutines:50 SystemTime:2022-03-29 19:14:19.3522593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 19:14:19.720355    8480 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0329 19:14:19.722578    8480 start_flags.go:837] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0329 19:14:19.722709    8480 cni.go:93] Creating CNI manager for "calico"
	I0329 19:14:19.722783    8480 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0329 19:14:19.722818    8480 start_flags.go:306] config:
	{Name:calico-20220329190230-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220329190230-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 19:14:19.729294    8480 out.go:176] * Starting control plane node calico-20220329190230-1328 in cluster calico-20220329190230-1328
	I0329 19:14:19.729409    8480 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 19:14:19.732344    8480 out.go:176] * Pulling base image ...
	I0329 19:14:19.732344    8480 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 19:14:19.732913    8480 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 19:14:19.733051    8480 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 19:14:19.733051    8480 cache.go:57] Caching tarball of preloaded images
	I0329 19:14:19.733051    8480 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0329 19:14:19.733689    8480 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0329 19:14:19.733689    8480 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\config.json ...
	I0329 19:14:19.733689    8480 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\config.json: {Name:mk25246736a70d9f743ef4b1a026d51cc107a291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:14:20.224956    8480 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 19:14:20.224956    8480 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 19:14:20.224956    8480 cache.go:208] Successfully downloaded all kic artifacts
	I0329 19:14:20.224956    8480 start.go:348] acquiring machines lock for calico-20220329190230-1328: {Name:mk39f9534d29d4409938b94bbbcd503e1ee16b71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 19:14:20.224956    8480 start.go:352] acquired machines lock for "calico-20220329190230-1328" in 0s
	I0329 19:14:20.224956    8480 start.go:90] Provisioning new machine with config: &{Name:calico-20220329190230-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220329190230-1328 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 19:14:20.224956    8480 start.go:127] createHost starting for "" (driver="docker")
	I0329 19:14:20.229922    8480 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0329 19:14:20.229922    8480 start.go:161] libmachine.API.Create for "calico-20220329190230-1328" (driver="docker")
	I0329 19:14:20.229922    8480 client.go:168] LocalClient.Create starting
	I0329 19:14:20.230920    8480 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I0329 19:14:20.230920    8480 main.go:130] libmachine: Decoding PEM data...
	I0329 19:14:20.230920    8480 main.go:130] libmachine: Parsing certificate...
	I0329 19:14:20.230920    8480 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I0329 19:14:20.230920    8480 main.go:130] libmachine: Decoding PEM data...
	I0329 19:14:20.230920    8480 main.go:130] libmachine: Parsing certificate...
	I0329 19:14:20.240916    8480 cli_runner.go:133] Run: docker network inspect calico-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0329 19:14:20.702577    8480 cli_runner.go:180] docker network inspect calico-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0329 19:14:20.710657    8480 network_create.go:262] running [docker network inspect calico-20220329190230-1328] to gather additional debugging logs...
	I0329 19:14:20.710657    8480 cli_runner.go:133] Run: docker network inspect calico-20220329190230-1328
	W0329 19:14:21.179560    8480 cli_runner.go:180] docker network inspect calico-20220329190230-1328 returned with exit code 1
	I0329 19:14:21.179560    8480 network_create.go:265] error running [docker network inspect calico-20220329190230-1328]: docker network inspect calico-20220329190230-1328: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220329190230-1328
	I0329 19:14:21.179560    8480 network_create.go:267] output of [docker network inspect calico-20220329190230-1328]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220329190230-1328
	
	** /stderr **
	I0329 19:14:21.186564    8480 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 19:14:21.681148    8480 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000388160] misses:0}
	I0329 19:14:21.681148    8480 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 19:14:21.681148    8480 network_create.go:114] attempt to create docker network calico-20220329190230-1328 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0329 19:14:21.688399    8480 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220329190230-1328
	I0329 19:14:22.277573    8480 network_create.go:98] docker network calico-20220329190230-1328 192.168.49.0/24 created
	I0329 19:14:22.277573    8480 kic.go:106] calculated static IP "192.168.49.2" for the "calico-20220329190230-1328" container
	I0329 19:14:22.302513    8480 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0329 19:14:22.786140    8480 cli_runner.go:133] Run: docker volume create calico-20220329190230-1328 --label name.minikube.sigs.k8s.io=calico-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true
	I0329 19:14:23.288790    8480 oci.go:102] Successfully created a docker volume calico-20220329190230-1328
	I0329 19:14:23.298399    8480 cli_runner.go:133] Run: docker run --rm --name calico-20220329190230-1328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220329190230-1328 --entrypoint /usr/bin/test -v calico-20220329190230-1328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0329 19:14:25.893280    8480 cli_runner.go:186] Completed: docker run --rm --name calico-20220329190230-1328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220329190230-1328 --entrypoint /usr/bin/test -v calico-20220329190230-1328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib: (2.5947175s)
	I0329 19:14:25.893280    8480 oci.go:106] Successfully prepared a docker volume calico-20220329190230-1328
	I0329 19:14:25.893280    8480 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 19:14:25.893280    8480 kic.go:179] Starting extracting preloaded images to volume ...
	I0329 19:14:25.904202    8480 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220329190230-1328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0329 19:14:55.575512    8480 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220329190230-1328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (29.6709592s)
	I0329 19:14:55.575512    8480 kic.go:188] duration metric: took 29.682057 seconds to extract preloaded images to volume
	I0329 19:14:55.583980    8480 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 19:14:56.287635    8480 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:54 OomKillDisable:true NGoroutines:47 SystemTime:2022-03-29 19:14:55.9309625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 19:14:56.296079    8480 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0329 19:14:57.050413    8480 cli_runner.go:133] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220329190230-1328 --name calico-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220329190230-1328 --network calico-20220329190230-1328 --ip 192.168.49.2 --volume calico-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	W0329 19:15:07.417362    8480 cli_runner.go:180] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220329190230-1328 --name calico-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220329190230-1328 --network calico-20220329190230-1328 --ip 192.168.49.2 --volume calico-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 returned with exit code 125
	I0329 19:15:07.417362    8480 cli_runner.go:186] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220329190230-1328 --name calico-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220329190230-1328 --network calico-20220329190230-1328 --ip 192.168.49.2 --volume calico-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: (10.3668895s)
	I0329 19:15:07.417362    8480 client.go:171] LocalClient.Create took 47.1871641s
	I0329 19:15:09.442459    8480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 19:15:09.450458    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	W0329 19:15:09.912779    8480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328 returned with exit code 1
	I0329 19:15:09.912981    8480 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0329 19:15:10.203255    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	W0329 19:15:10.655414    8480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328 returned with exit code 1
	I0329 19:15:10.655540    8480 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0329 19:15:11.210799    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	W0329 19:15:11.662014    8480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328 returned with exit code 1
	W0329 19:15:11.662014    8480 start.go:277] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0329 19:15:11.662014    8480 start.go:244] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0329 19:15:11.671019    8480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 19:15:11.678021    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	W0329 19:15:12.151324    8480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328 returned with exit code 1
	I0329 19:15:12.151324    8480 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0329 19:15:12.394155    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	W0329 19:15:12.829118    8480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328 returned with exit code 1
	I0329 19:15:12.829118    8480 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0329 19:15:13.194839    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	W0329 19:15:13.682069    8480 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328 returned with exit code 1
	W0329 19:15:13.682417    8480 start.go:292] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0329 19:15:13.682417    8480 start.go:249] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
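	[editor's note] The `retry.go:31` lines above show minikube re-running the failing `docker container inspect` with a growing delay between attempts (234ms, then 346ms). A minimal sketch of that retry-with-growing-delay pattern; the helper name and parameters below are illustrative, not minikube's actual implementation:

```python
import time

def retry(fn, attempts=3, base_delay=0.2, growth=1.5):
    """Call fn until it succeeds or attempts are exhausted,
    sleeping slightly longer before each retry (as retry.go does)."""
    delay = base_delay
    last_err = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as err:  # the real code matches specific error types
            last_err = err
            if i < attempts - 1:
                time.sleep(delay)
                delay *= growth
    raise last_err
```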
	I0329 19:15:13.682537    8480 start.go:130] duration metric: createHost completed in 53.4572681s
	I0329 19:15:13.682537    8480 start.go:81] releasing machines lock for "calico-20220329190230-1328", held for 53.4572681s
	W0329 19:15:13.682685    8480 start.go:570] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220329190230-1328 --name calico-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220329190230-1328 --network calico-20220329190230-1328 --ip 192.168.49.2 --volume calico-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	3d251233f5370074671f5df3120f2d4d682b1bbedff0e935af14b0628359a359
	
	stderr:
	docker: Error response from daemon: network calico-20220329190230-1328 not found.
	I0329 19:15:13.700634    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	W0329 19:15:14.193204    8480 start.go:575] delete host: Docker machine "calico-20220329190230-1328" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0329 19:15:14.193729    8480 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220329190230-1328 --name calico-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220329190230-1328 --network calico-20220329190230-1328 --ip 192.168.49.2 --volume calico-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	3d251233f5370074671f5df3120f2d4d682b1bbedff0e935af14b0628359a359
	
	stderr:
	docker: Error response from daemon: network calico-20220329190230-1328 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220329190230-1328 --name calico-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220329190230-1328 --network calico-20220329190230-1328 --ip 192.168.49.2 --volume calico-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	3d251233f5370074671f5df3120f2d4d682b1bbedff0e935af14b0628359a359
	
	stderr:
	docker: Error response from daemon: network calico-20220329190230-1328 not found.
	
	I0329 19:15:14.193889    8480 start.go:585] Will try again in 5 seconds ...
	I0329 19:15:19.201913    8480 start.go:348] acquiring machines lock for calico-20220329190230-1328: {Name:mk39f9534d29d4409938b94bbbcd503e1ee16b71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 19:15:19.202115    8480 start.go:352] acquired machines lock for "calico-20220329190230-1328" in 202.7µs
	I0329 19:15:19.202235    8480 start.go:94] Skipping create...Using existing machine configuration
	I0329 19:15:19.202235    8480 fix.go:55] fixHost starting: 
	I0329 19:15:19.216327    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:19.670866    8480 fix.go:108] recreateIfNeeded on calico-20220329190230-1328: state= err=<nil>
	I0329 19:15:19.671112    8480 fix.go:113] machineExists: false. err=machine does not exist
	I0329 19:15:19.674976    8480 out.go:176] * docker "calico-20220329190230-1328" container is missing, will recreate.
	I0329 19:15:19.674976    8480 delete.go:124] DEMOLISHING calico-20220329190230-1328 ...
	I0329 19:15:19.690165    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:20.129480    8480 stop.go:79] host is in state 
	I0329 19:15:20.129480    8480 main.go:130] libmachine: Stopping "calico-20220329190230-1328"...
	I0329 19:15:20.144238    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:20.616758    8480 kic_runner.go:93] Run: systemctl --version
	I0329 19:15:20.616758    8480 kic_runner.go:114] Args: [docker exec --privileged calico-20220329190230-1328 systemctl --version]
	I0329 19:15:21.229878    8480 kic_runner.go:93] Run: sudo service kubelet stop
	I0329 19:15:21.229878    8480 kic_runner.go:114] Args: [docker exec --privileged calico-20220329190230-1328 sudo service kubelet stop]
	I0329 19:15:21.824499    8480 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 3d251233f5370074671f5df3120f2d4d682b1bbedff0e935af14b0628359a359 is not running
	
	** /stderr **
	W0329 19:15:21.824665    8480 kic.go:443] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 3d251233f5370074671f5df3120f2d4d682b1bbedff0e935af14b0628359a359 is not running
	I0329 19:15:21.859487    8480 kic_runner.go:93] Run: sudo service kubelet stop
	I0329 19:15:21.859487    8480 kic_runner.go:114] Args: [docker exec --privileged calico-20220329190230-1328 sudo service kubelet stop]
	I0329 19:15:22.423051    8480 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 3d251233f5370074671f5df3120f2d4d682b1bbedff0e935af14b0628359a359 is not running
	
	** /stderr **
	W0329 19:15:22.423051    8480 kic.go:445] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 3d251233f5370074671f5df3120f2d4d682b1bbedff0e935af14b0628359a359 is not running
	I0329 19:15:22.438119    8480 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0329 19:15:22.438196    8480 kic_runner.go:114] Args: [docker exec --privileged calico-20220329190230-1328 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0329 19:15:22.993554    8480 kic.go:456] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 3d251233f5370074671f5df3120f2d4d682b1bbedff0e935af14b0628359a359 is not running
	I0329 19:15:22.993630    8480 kic.go:466] successfully stopped kubernetes!
	I0329 19:15:23.012721    8480 kic_runner.go:93] Run: pgrep kube-apiserver
	I0329 19:15:23.012721    8480 kic_runner.go:114] Args: [docker exec --privileged calico-20220329190230-1328 pgrep kube-apiserver]
	I0329 19:15:24.178758    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:27.616880    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:31.115067    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:34.608215    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:38.080482    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:41.558431    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:45.038892    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:48.569377    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:52.026728    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:55.537849    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:15:59.021862    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:02.495144    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:05.988462    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:09.489055    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:12.984805    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:16.470404    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:19.945239    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:23.438284    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:26.893589    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:30.380468    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:33.882069    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:37.406263    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:40.887285    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:44.415620    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:47.928818    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:51.422401    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:54.947503    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:16:58.496053    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:02.038929    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:05.651899    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:09.197542    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:12.741683    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:16.288526    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:19.859687    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:23.396833    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:26.923106    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:30.446639    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:34.014945    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:37.613663    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:41.148177    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:44.677579    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:48.230769    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:51.755996    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:55.313689    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:17:58.847304    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:02.385505    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:05.872918    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:09.406995    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:12.945318    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:16.409205    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:19.899106    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:23.382474    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:26.872676    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:30.368627    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:33.857284    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:37.332844    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:40.830723    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:44.378629    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:47.880266    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:51.402561    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:55.475592    8480 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0329 19:18:55.475667    8480 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
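	[editor's note] The long run of identical `docker container inspect ... --format={{.State.Status}}` calls above is a bounded status poll: stop keeps checking the container state and gives up after a fixed cap, which is what "Maximum number of retries (60) exceeded" reports. A minimal sketch of that loop; names and the delay parameter are hypothetical, not minikube's code:

```python
import time

def wait_for_state(get_state, want="exited", max_retries=60, delay=0.0):
    """Poll get_state() until it returns `want`; raise after
    max_retries attempts, mirroring the stop loop seen in the log."""
    for _ in range(max_retries):
        if get_state() == want:
            return True
        time.sleep(delay)
    raise TimeoutError(f"Maximum number of retries ({max_retries}) exceeded")
```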
	I0329 19:18:55.493008    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	W0329 19:18:55.977167    8480 delete.go:135] deletehost failed: Docker machine "calico-20220329190230-1328" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0329 19:18:55.985253    8480 cli_runner.go:133] Run: docker container inspect -f {{.Id}} calico-20220329190230-1328
	I0329 19:18:56.439989    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:57.361748    8480 cli_runner.go:133] Run: docker exec --privileged -t calico-20220329190230-1328 /bin/bash -c "sudo init 0"
	W0329 19:18:58.021082    8480 cli_runner.go:180] docker exec --privileged -t calico-20220329190230-1328 /bin/bash -c "sudo init 0" returned with exit code 1
	I0329 19:18:58.021082    8480 oci.go:656] error shutdown calico-20220329190230-1328: docker exec --privileged -t calico-20220329190230-1328 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 3d251233f5370074671f5df3120f2d4d682b1bbedff0e935af14b0628359a359 is not running
	I0329 19:18:59.037810    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:18:59.610170    8480 oci.go:670] temporary error: container calico-20220329190230-1328 status is  but expect it to be exited
	I0329 19:18:59.610170    8480 oci.go:676] Successfully shutdown container calico-20220329190230-1328
	I0329 19:18:59.618050    8480 cli_runner.go:133] Run: docker rm -f -v calico-20220329190230-1328
	I0329 19:19:14.460041    8480 cli_runner.go:186] Completed: docker rm -f -v calico-20220329190230-1328: (14.8417686s)
	I0329 19:19:14.468218    8480 cli_runner.go:133] Run: docker container inspect -f {{.Id}} calico-20220329190230-1328
	W0329 19:19:14.913410    8480 cli_runner.go:180] docker container inspect -f {{.Id}} calico-20220329190230-1328 returned with exit code 1
	I0329 19:19:14.922251    8480 cli_runner.go:133] Run: docker network inspect calico-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0329 19:19:15.369806    8480 cli_runner.go:180] docker network inspect calico-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0329 19:19:15.381638    8480 network_create.go:262] running [docker network inspect calico-20220329190230-1328] to gather additional debugging logs...
	I0329 19:19:15.381638    8480 cli_runner.go:133] Run: docker network inspect calico-20220329190230-1328
	W0329 19:19:15.831932    8480 cli_runner.go:180] docker network inspect calico-20220329190230-1328 returned with exit code 1
	I0329 19:19:15.831932    8480 network_create.go:265] error running [docker network inspect calico-20220329190230-1328]: docker network inspect calico-20220329190230-1328: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220329190230-1328
	I0329 19:19:15.831932    8480 network_create.go:267] output of [docker network inspect calico-20220329190230-1328]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220329190230-1328
	
	** /stderr **
	W0329 19:19:15.832941    8480 delete.go:139] delete failed (probably ok) <nil>
	I0329 19:19:15.832941    8480 fix.go:120] Sleeping 1 second for extra luck!
	I0329 19:19:16.846388    8480 start.go:127] createHost starting for "" (driver="docker")
	I0329 19:19:16.856912    8480 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0329 19:19:16.857104    8480 start.go:161] libmachine.API.Create for "calico-20220329190230-1328" (driver="docker")
	I0329 19:19:16.857104    8480 client.go:168] LocalClient.Create starting
	I0329 19:19:16.857813    8480 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I0329 19:19:16.858061    8480 main.go:130] libmachine: Decoding PEM data...
	I0329 19:19:16.858061    8480 main.go:130] libmachine: Parsing certificate...
	I0329 19:19:16.858353    8480 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I0329 19:19:16.858743    8480 main.go:130] libmachine: Decoding PEM data...
	I0329 19:19:16.858775    8480 main.go:130] libmachine: Parsing certificate...
	I0329 19:19:16.870763    8480 cli_runner.go:133] Run: docker network inspect calico-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0329 19:19:17.341334    8480 cli_runner.go:180] docker network inspect calico-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0329 19:19:17.352261    8480 network_create.go:262] running [docker network inspect calico-20220329190230-1328] to gather additional debugging logs...
	I0329 19:19:17.352261    8480 cli_runner.go:133] Run: docker network inspect calico-20220329190230-1328
	W0329 19:19:17.855013    8480 cli_runner.go:180] docker network inspect calico-20220329190230-1328 returned with exit code 1
	I0329 19:19:17.855013    8480 network_create.go:265] error running [docker network inspect calico-20220329190230-1328]: docker network inspect calico-20220329190230-1328: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220329190230-1328
	I0329 19:19:17.855013    8480 network_create.go:267] output of [docker network inspect calico-20220329190230-1328]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220329190230-1328
	
	** /stderr **
	I0329 19:19:17.863806    8480 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 19:19:18.355650    8480 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000388160] amended:false}} dirty:map[] misses:0}
	I0329 19:19:18.355650    8480 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 19:19:18.355650    8480 network_create.go:114] attempt to create docker network calico-20220329190230-1328 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0329 19:19:18.362785    8480 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220329190230-1328
	W0329 19:19:18.859397    8480 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220329190230-1328 returned with exit code 1
	W0329 19:19:18.859397    8480 network_create.go:106] failed to create docker network calico-20220329190230-1328 192.168.49.0/24, will retry: subnet is taken
	I0329 19:19:18.877392    8480 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000388160] amended:false}} dirty:map[] misses:0}
	I0329 19:19:18.877392    8480 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 19:19:18.894394    8480 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000388160] amended:true}} dirty:map[192.168.49.0:0xc000388160 192.168.58.0:0xc0003882c0] misses:0}
	I0329 19:19:18.894394    8480 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 19:19:18.894394    8480 network_create.go:114] attempt to create docker network calico-20220329190230-1328 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0329 19:19:18.902397    8480 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220329190230-1328
	W0329 19:19:19.417391    8480 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220329190230-1328 returned with exit code 1
	W0329 19:19:19.417391    8480 network_create.go:106] failed to create docker network calico-20220329190230-1328 192.168.58.0/24, will retry: subnet is taken
	I0329 19:19:19.437415    8480 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000388160] amended:true}} dirty:map[192.168.49.0:0xc000388160 192.168.58.0:0xc0003882c0] misses:1}
	I0329 19:19:19.437415    8480 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 19:19:19.456394    8480 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000388160] amended:true}} dirty:map[192.168.49.0:0xc000388160 192.168.58.0:0xc0003882c0 192.168.67.0:0xc00058c4f0] misses:1}
	I0329 19:19:19.457401    8480 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 19:19:19.457401    8480 network_create.go:114] attempt to create docker network calico-20220329190230-1328 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0329 19:19:19.464399    8480 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220329190230-1328
	I0329 19:19:20.072009    8480 network_create.go:98] docker network calico-20220329190230-1328 192.168.67.0/24 created
	I0329 19:19:20.072009    8480 kic.go:106] calculated static IP "192.168.67.2" for the "calico-20220329190230-1328" container
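	[editor's note] The `network_create.go` / `network.go` lines above scan candidate private /24 subnets (192.168.49.0 -> 192.168.58.0 -> 192.168.67.0) until `docker network create` succeeds. A rough sketch of that selection loop; the step size is inferred from this log's progression and the function names are illustrative, not minikube's actual algorithm:

```python
def candidate_subnets(start_octet=49, step=9, limit=5):
    """Yield candidate 192.168.x.0/24 subnets, mirroring the
    49 -> 58 -> 67 progression seen in the log above."""
    for i in range(limit):
        yield f"192.168.{start_octet + i * step}.0/24"

def pick_subnet(is_taken):
    """Return the first candidate subnet the predicate reports as free,
    or None if every candidate is reserved."""
    for subnet in candidate_subnets():
        if not is_taken(subnet):
            return subnet
    return None
```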
	I0329 19:19:20.094032    8480 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0329 19:19:20.588950    8480 cli_runner.go:133] Run: docker volume create calico-20220329190230-1328 --label name.minikube.sigs.k8s.io=calico-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true
	I0329 19:19:21.037409    8480 oci.go:102] Successfully created a docker volume calico-20220329190230-1328
	I0329 19:19:21.044404    8480 cli_runner.go:133] Run: docker run --rm --name calico-20220329190230-1328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220329190230-1328 --entrypoint /usr/bin/test -v calico-20220329190230-1328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0329 19:19:23.493297    8480 cli_runner.go:186] Completed: docker run --rm --name calico-20220329190230-1328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220329190230-1328 --entrypoint /usr/bin/test -v calico-20220329190230-1328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib: (2.4488085s)
	I0329 19:19:23.493462    8480 oci.go:106] Successfully prepared a docker volume calico-20220329190230-1328
	I0329 19:19:23.493518    8480 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 19:19:23.493571    8480 kic.go:179] Starting extracting preloaded images to volume ...
	I0329 19:19:23.504506    8480 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220329190230-1328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0329 19:20:02.941375    8480 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220329190230-1328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (39.4365793s)
	I0329 19:20:02.941375    8480 kic.go:188] duration metric: took 39.447579 seconds to extract preloaded images to volume
	I0329 19:20:02.950366    8480 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 19:20:03.655797    8480 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:73 OomKillDisable:true NGoroutines:55 SystemTime:2022-03-29 19:20:03.3009695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 19:20:03.670787    8480 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0329 19:20:04.505960    8480 cli_runner.go:133] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220329190230-1328 --name calico-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220329190230-1328 --network calico-20220329190230-1328 --ip 192.168.67.2 --volume calico-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0329 19:20:07.125783    8480 cli_runner.go:186] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220329190230-1328 --name calico-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220329190230-1328 --network calico-20220329190230-1328 --ip 192.168.67.2 --volume calico-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: (2.619808s)
	I0329 19:20:07.132783    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Running}}
	I0329 19:20:07.686699    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:20:08.195563    8480 cli_runner.go:133] Run: docker exec calico-20220329190230-1328 stat /var/lib/dpkg/alternatives/iptables
	I0329 19:20:09.131450    8480 oci.go:278] the created container "calico-20220329190230-1328" has a running status.
	I0329 19:20:09.131450    8480 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220329190230-1328\id_rsa...
	I0329 19:20:09.236146    8480 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220329190230-1328\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0329 19:20:09.926543    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:20:10.430334    8480 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0329 19:20:10.430334    8480 kic_runner.go:114] Args: [docker exec --privileged calico-20220329190230-1328 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0329 19:20:11.420776    8480 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220329190230-1328\id_rsa...
	I0329 19:20:11.999922    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:20:12.486842    8480 machine.go:88] provisioning docker machine ...
	I0329 19:20:12.486842    8480 ubuntu.go:169] provisioning hostname "calico-20220329190230-1328"
	I0329 19:20:12.493866    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:12.980488    8480 main.go:130] libmachine: Using SSH client type: native
	I0329 19:20:12.986488    8480 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57467 <nil> <nil>}
	I0329 19:20:12.986488    8480 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20220329190230-1328 && echo "calico-20220329190230-1328" | sudo tee /etc/hostname
	I0329 19:20:13.168844    8480 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20220329190230-1328
	
	I0329 19:20:13.176843    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:13.641174    8480 main.go:130] libmachine: Using SSH client type: native
	I0329 19:20:13.641174    8480 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57467 <nil> <nil>}
	I0329 19:20:13.641174    8480 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220329190230-1328' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220329190230-1328/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220329190230-1328' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 19:20:13.803006    8480 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 19:20:13.803006    8480 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I0329 19:20:13.804011    8480 ubuntu.go:177] setting up certificates
	I0329 19:20:13.804011    8480 provision.go:83] configureAuth start
	I0329 19:20:13.811185    8480 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220329190230-1328
	I0329 19:20:14.361878    8480 provision.go:138] copyHostCerts
	I0329 19:20:14.361878    8480 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I0329 19:20:14.361878    8480 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I0329 19:20:14.361878    8480 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0329 19:20:14.363890    8480 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I0329 19:20:14.363890    8480 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I0329 19:20:14.363890    8480 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0329 19:20:14.365898    8480 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I0329 19:20:14.365898    8480 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I0329 19:20:14.366898    8480 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1679 bytes)
	I0329 19:20:14.367889    8480 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-20220329190230-1328 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220329190230-1328]
	I0329 19:20:15.034061    8480 provision.go:172] copyRemoteCerts
	I0329 19:20:15.043054    8480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 19:20:15.050058    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:15.547144    8480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57467 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220329190230-1328\id_rsa Username:docker}
	I0329 19:20:15.698164    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 19:20:15.749141    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0329 19:20:15.941471    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0329 19:20:15.996070    8480 provision.go:86] duration metric: configureAuth took 2.1920462s
	I0329 19:20:15.996070    8480 ubuntu.go:193] setting minikube options for container-runtime
	I0329 19:20:15.996070    8480 config.go:176] Loaded profile config "calico-20220329190230-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:20:16.006060    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:16.531382    8480 main.go:130] libmachine: Using SSH client type: native
	I0329 19:20:16.532415    8480 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57467 <nil> <nil>}
	I0329 19:20:16.532415    8480 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 19:20:16.666397    8480 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 19:20:16.666397    8480 ubuntu.go:71] root file system type: overlay
	I0329 19:20:16.666397    8480 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 19:20:16.674384    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:17.233976    8480 main.go:130] libmachine: Using SSH client type: native
	I0329 19:20:17.233976    8480 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57467 <nil> <nil>}
	I0329 19:20:17.233976    8480 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 19:20:17.453967    8480 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0329 19:20:17.461970    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:17.997656    8480 main.go:130] libmachine: Using SSH client type: native
	I0329 19:20:17.997656    8480 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57467 <nil> <nil>}
	I0329 19:20:17.997656    8480 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0329 19:20:19.924929    8480 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-03-29 19:20:17.412870100 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0329 19:20:19.924929    8480 machine.go:91] provisioned docker machine in 7.4380452s
	I0329 19:20:19.924929    8480 client.go:171] LocalClient.Create took 1m3.0674663s
	I0329 19:20:19.924929    8480 start.go:169] duration metric: libmachine.API.Create for "calico-20220329190230-1328" took 1m3.0674663s
	I0329 19:20:19.924929    8480 start.go:302] post-start starting for "calico-20220329190230-1328" (driver="docker")
	I0329 19:20:19.924929    8480 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 19:20:19.936920    8480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 19:20:19.943910    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:20.454653    8480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57467 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220329190230-1328\id_rsa Username:docker}
	I0329 19:20:20.632781    8480 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 19:20:20.643779    8480 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 19:20:20.643779    8480 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 19:20:20.643779    8480 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 19:20:20.643779    8480 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 19:20:20.643779    8480 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I0329 19:20:20.643779    8480 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I0329 19:20:20.644786    8480 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem -> 13282.pem in /etc/ssl/certs
	I0329 19:20:20.654774    8480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0329 19:20:20.683420    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem --> /etc/ssl/certs/13282.pem (1708 bytes)
	I0329 19:20:20.744549    8480 start.go:305] post-start completed in 819.6145ms
	I0329 19:20:20.755547    8480 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220329190230-1328
	I0329 19:20:21.342458    8480 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\config.json ...
	I0329 19:20:21.356305    8480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 19:20:21.368993    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:21.876491    8480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57467 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220329190230-1328\id_rsa Username:docker}
	I0329 19:20:22.020474    8480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 19:20:22.031469    8480 start.go:130] duration metric: createHost completed in 1m5.1846735s
	I0329 19:20:22.044469    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	W0329 19:20:22.568660    8480 fix.go:134] unexpected machine state, will restart: <nil>
	I0329 19:20:22.568813    8480 machine.go:88] provisioning docker machine ...
	I0329 19:20:22.568813    8480 ubuntu.go:169] provisioning hostname "calico-20220329190230-1328"
	I0329 19:20:22.585382    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:23.082701    8480 main.go:130] libmachine: Using SSH client type: native
	I0329 19:20:23.083712    8480 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57467 <nil> <nil>}
	I0329 19:20:23.083712    8480 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20220329190230-1328 && echo "calico-20220329190230-1328" | sudo tee /etc/hostname
	I0329 19:20:23.308355    8480 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20220329190230-1328
	
	I0329 19:20:23.320351    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:23.826062    8480 main.go:130] libmachine: Using SSH client type: native
	I0329 19:20:23.827060    8480 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57467 <nil> <nil>}
	I0329 19:20:23.827060    8480 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220329190230-1328' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220329190230-1328/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220329190230-1328' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 19:20:23.961119    8480 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 19:20:23.961119    8480 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I0329 19:20:23.961119    8480 ubuntu.go:177] setting up certificates
	I0329 19:20:23.961119    8480 provision.go:83] configureAuth start
	I0329 19:20:23.973127    8480 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220329190230-1328
	I0329 19:20:24.495291    8480 provision.go:138] copyHostCerts
	I0329 19:20:24.495291    8480 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I0329 19:20:24.495291    8480 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I0329 19:20:24.496278    8480 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0329 19:20:24.497297    8480 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I0329 19:20:24.497297    8480 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I0329 19:20:24.498296    8480 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0329 19:20:24.499284    8480 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I0329 19:20:24.499284    8480 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I0329 19:20:24.500276    8480 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1679 bytes)
	I0329 19:20:24.501291    8480 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-20220329190230-1328 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220329190230-1328]
	I0329 19:20:24.689200    8480 provision.go:172] copyRemoteCerts
	I0329 19:20:24.701288    8480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 19:20:24.709168    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:25.204495    8480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57467 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220329190230-1328\id_rsa Username:docker}
	I0329 19:20:25.350353    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 19:20:25.400563    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0329 19:20:25.452121    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0329 19:20:25.511536    8480 provision.go:86] duration metric: configureAuth took 1.5504076s
	I0329 19:20:25.511536    8480 ubuntu.go:193] setting minikube options for container-runtime
	I0329 19:20:25.512551    8480 config.go:176] Loaded profile config "calico-20220329190230-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:20:25.519550    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:25.994759    8480 main.go:130] libmachine: Using SSH client type: native
	I0329 19:20:25.994759    8480 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57467 <nil> <nil>}
	I0329 19:20:25.994759    8480 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 19:20:26.173751    8480 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 19:20:26.173751    8480 ubuntu.go:71] root file system type: overlay
	I0329 19:20:26.174760    8480 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 19:20:26.185744    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:26.665919    8480 main.go:130] libmachine: Using SSH client type: native
	I0329 19:20:26.666677    8480 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57467 <nil> <nil>}
	I0329 19:20:26.666677    8480 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 19:20:26.887065    8480 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0329 19:20:26.895059    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:27.434885    8480 main.go:130] libmachine: Using SSH client type: native
	I0329 19:20:27.435631    8480 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57467 <nil> <nil>}
	I0329 19:20:27.435723    8480 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0329 19:20:27.581783    8480 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 19:20:27.581783    8480 machine.go:91] provisioned docker machine in 5.0129421s
	I0329 19:20:27.581783    8480 start.go:302] post-start starting for "calico-20220329190230-1328" (driver="docker")
	I0329 19:20:27.581783    8480 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 19:20:27.593770    8480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 19:20:27.600772    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:28.125359    8480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57467 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220329190230-1328\id_rsa Username:docker}
	I0329 19:20:28.285179    8480 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 19:20:28.299260    8480 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 19:20:28.299260    8480 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 19:20:28.299260    8480 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 19:20:28.299260    8480 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 19:20:28.299260    8480 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I0329 19:20:28.300251    8480 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I0329 19:20:28.301253    8480 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem -> 13282.pem in /etc/ssl/certs
	I0329 19:20:28.310277    8480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0329 19:20:28.331394    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem --> /etc/ssl/certs/13282.pem (1708 bytes)
	I0329 19:20:28.386004    8480 start.go:305] post-start completed in 804.2164ms
	I0329 19:20:28.394723    8480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 19:20:28.402749    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:28.895166    8480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57467 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220329190230-1328\id_rsa Username:docker}
	I0329 19:20:29.053576    8480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 19:20:29.068552    8480 fix.go:57] fixHost completed within 5m9.8645542s
	I0329 19:20:29.068552    8480 start.go:81] releasing machines lock for "calico-20220329190230-1328", held for 5m9.8646736s
	I0329 19:20:29.078537    8480 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220329190230-1328
	I0329 19:20:29.620440    8480 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0329 19:20:29.628418    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:29.629424    8480 ssh_runner.go:195] Run: sudo service containerd status
	I0329 19:20:29.637409    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:30.150101    8480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57467 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220329190230-1328\id_rsa Username:docker}
	I0329 19:20:30.157511    8480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57467 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220329190230-1328\id_rsa Username:docker}
	I0329 19:20:30.338519    8480 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 19:20:30.387022    8480 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0329 19:20:30.403036    8480 ssh_runner.go:195] Run: sudo service crio status
	I0329 19:20:30.450030    8480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0329 19:20:30.503038    8480 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 19:20:30.544027    8480 ssh_runner.go:195] Run: sudo service docker status
	I0329 19:20:30.589020    8480 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 19:20:30.702377    8480 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 19:20:30.801195    8480 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0329 19:20:30.813192    8480 cli_runner.go:133] Run: docker exec -t calico-20220329190230-1328 dig +short host.docker.internal
	I0329 19:20:31.788266    8480 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0329 19:20:31.799795    8480 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0329 19:20:31.814785    8480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 19:20:31.850057    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:20:32.388961    8480 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 19:20:32.396934    8480 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 19:20:32.492556    8480 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 19:20:32.492556    8480 docker.go:537] Images already preloaded, skipping extraction
	I0329 19:20:32.503514    8480 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 19:20:32.591816    8480 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 19:20:32.591816    8480 cache_images.go:84] Images are preloaded, skipping loading
	I0329 19:20:32.599803    8480 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0329 19:20:32.837431    8480 cni.go:93] Creating CNI manager for "calico"
	I0329 19:20:32.837431    8480 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0329 19:20:32.837431    8480 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220329190230-1328 NodeName:calico-20220329190230-1328 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0329 19:20:32.837431    8480 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220329190230-1328"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0329 19:20:32.837431    8480 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220329190230-1328 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:calico-20220329190230-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0329 19:20:32.848421    8480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0329 19:20:32.879270    8480 binaries.go:44] Found k8s binaries, skipping transfer
	I0329 19:20:32.892273    8480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
	I0329 19:20:32.913484    8480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0329 19:20:32.956595    8480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0329 19:20:32.998158    8480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0329 19:20:33.043658    8480 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
	I0329 19:20:33.096164    8480 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes)
	I0329 19:20:33.145148    8480 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0329 19:20:33.155138    8480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 19:20:33.182259    8480 certs.go:54] Setting up C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328 for IP: 192.168.67.2
	I0329 19:20:33.182943    8480 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key
	I0329 19:20:33.182943    8480 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key
	I0329 19:20:33.183409    8480 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\client.key
	I0329 19:20:33.183409    8480 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\client.crt with IP's: []
	I0329 19:20:33.366560    8480 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\client.crt ...
	I0329 19:20:33.366560    8480 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\client.crt: {Name:mkfdb2418b335cfdf6e9f95234d26d23db9a6564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:20:33.369595    8480 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\client.key ...
	I0329 19:20:33.369595    8480 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\client.key: {Name:mk8588468f5892d9ff29898123aa9b96e652bd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:20:33.371696    8480 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\apiserver.key.c7fa3a9e
	I0329 19:20:33.371945    8480 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0329 19:20:33.771745    8480 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\apiserver.crt.c7fa3a9e ...
	I0329 19:20:33.771745    8480 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\apiserver.crt.c7fa3a9e: {Name:mk539f59627fd3d4a1d6910269de80dc3819807c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:20:33.772750    8480 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\apiserver.key.c7fa3a9e ...
	I0329 19:20:33.772750    8480 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\apiserver.key.c7fa3a9e: {Name:mk2740afae12dc10012532864ac7bd8f7f785de7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:20:33.773751    8480 certs.go:320] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\apiserver.crt
	I0329 19:20:33.781775    8480 certs.go:324] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\apiserver.key
	I0329 19:20:33.788770    8480 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\proxy-client.key
	I0329 19:20:33.789112    8480 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\proxy-client.crt with IP's: []
	I0329 19:20:33.989098    8480 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\proxy-client.crt ...
	I0329 19:20:33.989098    8480 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\proxy-client.crt: {Name:mk819cbbeed05c84fc2189476bddebcb06eb5f49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:20:33.990443    8480 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\proxy-client.key ...
	I0329 19:20:33.990443    8480 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\proxy-client.key: {Name:mk9b95377e3e42de8bf5bbd12101b815811c94f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:20:33.999408    8480 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\1328.pem (1338 bytes)
	W0329 19:20:33.999838    8480 certs.go:384] ignoring C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\1328_empty.pem, impossibly tiny 0 bytes
	I0329 19:20:33.999838    8480 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0329 19:20:34.000224    8480 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0329 19:20:34.000515    8480 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0329 19:20:34.000742    8480 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0329 19:20:34.000918    8480 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem (1708 bytes)
	I0329 19:20:34.001915    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0329 19:20:34.063424    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0329 19:20:34.118070    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0329 19:20:34.169003    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\calico-20220329190230-1328\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0329 19:20:34.224533    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0329 19:20:34.276581    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0329 19:20:34.323456    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0329 19:20:34.380660    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0329 19:20:34.448680    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0329 19:20:34.505871    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\1328.pem --> /usr/share/ca-certificates/1328.pem (1338 bytes)
	I0329 19:20:34.556285    8480 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem --> /usr/share/ca-certificates/13282.pem (1708 bytes)
	I0329 19:20:34.622663    8480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0329 19:20:34.679329    8480 ssh_runner.go:195] Run: openssl version
	I0329 19:20:34.711808    8480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0329 19:20:34.753478    8480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0329 19:20:34.769796    8480 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 29 17:18 /usr/share/ca-certificates/minikubeCA.pem
	I0329 19:20:34.779792    8480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0329 19:20:34.801801    8480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0329 19:20:34.836915    8480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1328.pem && ln -fs /usr/share/ca-certificates/1328.pem /etc/ssl/certs/1328.pem"
	I0329 19:20:34.872724    8480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1328.pem
	I0329 19:20:34.892740    8480 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 29 17:29 /usr/share/ca-certificates/1328.pem
	I0329 19:20:34.901721    8480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1328.pem
	I0329 19:20:34.923721    8480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1328.pem /etc/ssl/certs/51391683.0"
	I0329 19:20:34.967399    8480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13282.pem && ln -fs /usr/share/ca-certificates/13282.pem /etc/ssl/certs/13282.pem"
	I0329 19:20:35.006922    8480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13282.pem
	I0329 19:20:35.021580    8480 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 29 17:29 /usr/share/ca-certificates/13282.pem
	I0329 19:20:35.030553    8480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13282.pem
	I0329 19:20:35.055539    8480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13282.pem /etc/ssl/certs/3ec20f2e.0"
	I0329 19:20:35.077538    8480 kubeadm.go:391] StartCluster: {Name:calico-20220329190230-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220329190230-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 19:20:35.084541    8480 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0329 19:20:35.168923    8480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0329 19:20:35.203782    8480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0329 19:20:35.223785    8480 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0329 19:20:35.235793    8480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0329 19:20:35.254783    8480 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
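The "config check failed, skipping stale config cleanup" above is the expected path on a fresh node: coreutils `ls` exits with status 2 when an operand does not exist, and minikube reads that as "no stale kubeconfigs to clean up". A quick local illustration (the path is an arbitrary assumption; with GNU coreutils this prints "exit status: 2"):

```shell
# GNU ls exits 2 ("serious trouble", e.g. a missing operand) for a path
# that does not exist -- the same "Process exited with status 2" in the log.
ls -la /definitely/not/a/real/path.conf 2>/dev/null || echo "exit status: $?"
```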
	I0329 19:20:35.254783    8480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0329 19:21:07.742321    8480 out.go:203]   - Generating certificates and keys ...
	I0329 19:21:07.761314    8480 out.go:203]   - Booting up control plane ...
	I0329 19:21:07.779887    8480 out.go:203]   - Configuring RBAC rules ...
	I0329 19:21:07.786887    8480 cni.go:93] Creating CNI manager for "calico"
	I0329 19:21:07.794881    8480 out.go:176] * Configuring Calico (Container Networking Interface) ...
	I0329 19:21:07.795892    8480 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0329 19:21:07.795892    8480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0329 19:21:07.852870    8480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0329 19:21:14.912189    8480 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (7.0592786s)
	I0329 19:21:14.913196    8480 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0329 19:21:14.926192    8480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:21:14.927193    8480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3 minikube.k8s.io/name=calico-20220329190230-1328 minikube.k8s.io/updated_at=2022_03_29T19_21_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:21:14.930210    8480 ops.go:34] apiserver oom_adj: -16
	I0329 19:21:15.221306    8480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:21:15.923801    8480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:21:16.431939    8480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:21:16.928610    8480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:21:17.436475    8480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:21:18.432182    8480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:21:18.927937    8480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:21:20.075989    8480 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.1480462s)
	I0329 19:21:20.429170    8480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:21:21.474311    8480 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.0451355s)
	I0329 19:21:21.474311    8480 kubeadm.go:1020] duration metric: took 6.5610772s to wait for elevateKubeSystemPrivileges.
	I0329 19:21:21.474311    8480 kubeadm.go:393] StartCluster complete in 46.3965064s
	I0329 19:21:21.474311    8480 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:21:21.474311    8480 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0329 19:21:21.481801    8480 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:21:22.181222    8480 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220329190230-1328" rescaled to 1
	I0329 19:21:22.181366    8480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0329 19:21:22.181366    8480 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0329 19:21:22.181366    8480 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 19:21:22.181366    8480 addons.go:65] Setting storage-provisioner=true in profile "calico-20220329190230-1328"
	I0329 19:21:22.181366    8480 addons.go:153] Setting addon storage-provisioner=true in "calico-20220329190230-1328"
	W0329 19:21:22.188995    8480 addons.go:165] addon storage-provisioner should already be in state true
	I0329 19:21:22.181366    8480 addons.go:65] Setting default-storageclass=true in profile "calico-20220329190230-1328"
	I0329 19:21:22.181366    8480 config.go:176] Loaded profile config "calico-20220329190230-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:21:22.188940    8480 out.go:176] * Verifying Kubernetes components...
	I0329 19:21:22.188995    8480 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220329190230-1328"
	I0329 19:21:22.189182    8480 host.go:66] Checking if "calico-20220329190230-1328" exists ...
	I0329 19:21:22.209465    8480 ssh_runner.go:195] Run: sudo service kubelet status
	I0329 19:21:22.220488    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:21:22.221491    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:21:22.582873    8480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0329 19:21:22.596853    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:21:22.835955    8480 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 19:21:22.835955    8480 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 19:21:22.835955    8480 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0329 19:21:22.843963    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:21:22.885269    8480 addons.go:153] Setting addon default-storageclass=true in "calico-20220329190230-1328"
	W0329 19:21:22.885269    8480 addons.go:165] addon default-storageclass should already be in state true
	I0329 19:21:22.885269    8480 host.go:66] Checking if "calico-20220329190230-1328" exists ...
	I0329 19:21:22.910983    8480 cli_runner.go:133] Run: docker container inspect calico-20220329190230-1328 --format={{.State.Status}}
	I0329 19:21:23.280962    8480 node_ready.go:35] waiting up to 5m0s for node "calico-20220329190230-1328" to be "Ready" ...
	I0329 19:21:23.286974    8480 node_ready.go:49] node "calico-20220329190230-1328" has status "Ready":"True"
	I0329 19:21:23.286974    8480 node_ready.go:38] duration metric: took 6.0119ms waiting for node "calico-20220329190230-1328" to be "Ready" ...
	I0329 19:21:23.286974    8480 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 19:21:23.380811    8480 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace to be "Ready" ...
	I0329 19:21:23.511348    8480 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0329 19:21:23.511348    8480 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0329 19:21:23.523591    8480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220329190230-1328
	I0329 19:21:23.526872    8480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57467 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220329190230-1328\id_rsa Username:docker}
	I0329 19:21:24.092020    8480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57467 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\calico-20220329190230-1328\id_rsa Username:docker}
	I0329 19:21:24.200152    8480 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 19:21:24.992276    8480 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0329 19:21:25.589269    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:28.169767    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:28.982983    8480 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.4000729s)
	I0329 19:21:28.982983    8480 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
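The long sed pipeline that just completed inserts a `hosts {}` block ahead of CoreDNS's `forward` directive so `host.minikube.internal` resolves in-cluster. Its effect can be reproduced offline on a sample configmap fragment (the sample Corefile and /tmp path are assumptions; note the pattern depends on the exact 8-space indentation that `kubectl get configmap coredns -o yaml` produces, and the `\n` escapes in the insert text are a GNU sed extension):

```shell
# Sample fragment matching the 8-space indentation of the YAML dump.
printf '.:53 {\n        forward . /etc/resolv.conf\n}\n' > /tmp/Corefile
# The same GNU sed insertion minikube pipes through kubectl replace:
sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' /tmp/Corefile
```

The `hosts` block lands immediately before `forward`, so CoreDNS answers for the injected record and falls through to the upstream resolver for everything else.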
	I0329 19:21:29.568383    8480 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.3682001s)
	I0329 19:21:29.569143    8480 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.5762633s)
	I0329 19:21:29.899322    8480 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0329 19:21:29.900342    8480 addons.go:417] enableAddons completed in 7.7189313s
	I0329 19:21:30.487450    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:32.581636    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:35.169613    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:37.576314    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:39.804637    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:42.010669    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:53.882864    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:56.045501    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:21:58.073149    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:00.525490    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:03.083233    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:05.580131    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:08.067578    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:10.612505    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:13.015018    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:15.073428    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:17.267048    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:19.569672    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:21.775890    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:24.081088    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:26.582720    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:30.473629    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:32.521910    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:35.900736    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:38.087027    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:41.601973    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:46.755229    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:49.174110    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:51.672480    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:54.196167    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:56.578693    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:22:59.023628    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:01.084067    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:03.100815    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:05.523801    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:07.590012    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:10.869177    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:13.069654    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:15.537845    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:18.028950    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:20.083704    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:22.525560    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:24.737014    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:27.027431    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:29.532675    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:31.572851    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:34.076936    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:36.079345    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:38.521449    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:40.586222    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:43.010747    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:45.081379    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:47.087581    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:49.518260    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:51.571110    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:54.025192    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:56.578855    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:23:59.025314    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:01.194361    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:03.585276    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:06.071107    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:08.524328    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:11.092116    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:13.570594    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:16.020848    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:18.515972    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:20.521309    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:22.584999    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:25.082399    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:27.534106    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:29.592667    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:32.080311    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:34.585671    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:37.021189    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:39.520247    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:42.015880    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:44.084031    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:46.518648    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:48.583048    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:50.672512    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:53.082349    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:55.084901    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:57.526371    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:24:59.574166    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:02.070577    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:04.071131    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:06.095480    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:08.571347    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:10.591554    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:13.022182    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:15.070077    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:17.571278    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:20.082159    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:22.520216    8480 pod_ready.go:102] pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:23.582426    8480 pod_ready.go:81] duration metric: took 4m0.2002467s waiting for pod "calico-kube-controllers-8594699699-lbr9j" in "kube-system" namespace to be "Ready" ...
	E0329 19:25:23.582426    8480 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0329 19:25:23.582426    8480 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-rln8z" in "kube-system" namespace to be "Ready" ...
	I0329 19:25:25.628737    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:27.668193    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:29.671325    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:32.170355    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:34.697874    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:37.126778    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:39.192320    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:41.202852    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:43.687775    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:46.269918    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:48.625905    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:50.772844    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:53.171569    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:55.185419    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:57.673479    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:25:59.685693    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:02.184975    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:04.187098    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:06.702454    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:09.172671    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:11.189718    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:13.686454    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:16.131421    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:18.186302    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:20.190559    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:22.194134    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:24.773881    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:27.128371    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:29.182855    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:31.193699    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:33.770368    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:36.190737    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:38.620832    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:40.683568    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:43.191163    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:45.680860    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:47.685881    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:50.132567    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:52.168805    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:54.694978    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:57.130191    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:26:59.135151    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:01.186497    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:03.189411    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:05.675341    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:08.181614    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:10.674678    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:12.686342    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:14.695164    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:17.187003    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:19.625887    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:21.628575    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:23.768855    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:26.185621    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:28.635834    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:30.685893    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:32.697563    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:35.127457    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:37.629896    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:40.134835    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:42.185200    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:44.188902    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:46.694406    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:49.145568    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:51.272259    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:53.687038    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:56.140427    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:27:58.629642    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:01.275196    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:03.702519    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:05.776956    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:08.179635    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:10.188941    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:12.673623    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:14.688210    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:17.186413    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:19.192662    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:30.974746    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:33.284942    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:35.688753    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:38.185390    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:40.623998    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:42.626018    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:44.680152    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:46.692264    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:49.183742    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:51.687622    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:54.186338    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:56.672989    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:28:58.688656    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:29:00.690130    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:29:02.836975    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:29:05.126154    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:29:07.145245    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:29:09.181786    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:29:11.199467    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:29:13.637096    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:29:16.127387    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:29:18.136970    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:29:30.474962    8480 pod_ready.go:102] pod "calico-node-rln8z" in "kube-system" namespace has status "Ready":"False"
	I0329 19:29:30.880614    8480 pod_ready.go:81] duration metric: took 4m7.2968204s waiting for pod "calico-node-rln8z" in "kube-system" namespace to be "Ready" ...
	E0329 19:29:30.880614    8480 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0329 19:29:30.880614    8480 pod_ready.go:38] duration metric: took 8m7.5909037s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 19:29:30.891590    8480 out.go:176] 
	W0329 19:29:30.891590    8480 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0329 19:29:30.891590    8480 out.go:241] * 
	* 
	W0329 19:29:30.893683    8480 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0329 19:29:30.899586    8480 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (915.51s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (359.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-20220329190230-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kindnet-20220329190230-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: exit status 80 (5m58.8881067s)

                                                
                                                
-- stdout --
	* [kindnet-20220329190230-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Starting control plane node kindnet-20220329190230-1328 in cluster kindnet-20220329190230-1328
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0329 19:21:55.571459    3060 out.go:297] Setting OutFile to fd 1852 ...
	I0329 19:21:55.639464    3060 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 19:21:55.639464    3060 out.go:310] Setting ErrFile to fd 1908...
	I0329 19:21:55.639464    3060 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 19:21:55.654455    3060 out.go:304] Setting JSON to false
	I0329 19:21:55.656455    3060 start.go:114] hostinfo: {"hostname":"minikube8","uptime":8912,"bootTime":1648572803,"procs":153,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0329 19:21:55.656455    3060 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 19:21:55.666473    3060 out.go:176] * [kindnet-20220329190230-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0329 19:21:55.666473    3060 notify.go:193] Checking for updates...
	I0329 19:21:55.678467    3060 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0329 19:21:55.686467    3060 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0329 19:21:55.690471    3060 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 19:21:55.693498    3060 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 19:21:55.695484    3060 config.go:176] Loaded profile config "calico-20220329190230-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:21:55.695484    3060 config.go:176] Loaded profile config "cert-expiration-20220329190729-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:21:55.696486    3060 config.go:176] Loaded profile config "cilium-20220329190230-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:21:55.696486    3060 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 19:21:58.022478    3060 docker.go:137] docker version: linux-20.10.13
	I0329 19:21:58.030485    3060 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 19:21:58.783447    3060 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:83 OomKillDisable:true NGoroutines:60 SystemTime:2022-03-29 19:21:58.4022559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 19:21:58.789286    3060 out.go:176] * Using the docker driver based on user configuration
	I0329 19:21:58.789286    3060 start.go:283] selected driver: docker
	I0329 19:21:58.789286    3060 start.go:800] validating driver "docker" against <nil>
	I0329 19:21:58.789286    3060 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0329 19:21:58.923237    3060 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 19:21:59.760623    3060 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:84 OomKillDisable:true NGoroutines:61 SystemTime:2022-03-29 19:21:59.3543198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 19:21:59.760623    3060 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0329 19:21:59.761550    3060 start_flags.go:837] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0329 19:21:59.761550    3060 cni.go:93] Creating CNI manager for "kindnet"
	I0329 19:21:59.761550    3060 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0329 19:21:59.761550    3060 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0329 19:21:59.761550    3060 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0329 19:21:59.761550    3060 start_flags.go:306] config:
	{Name:kindnet-20220329190230-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220329190230-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 19:21:59.767551    3060 out.go:176] * Starting control plane node kindnet-20220329190230-1328 in cluster kindnet-20220329190230-1328
	I0329 19:21:59.767551    3060 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 19:21:59.777555    3060 out.go:176] * Pulling base image ...
	I0329 19:21:59.777555    3060 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 19:21:59.777555    3060 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 19:21:59.778595    3060 preload.go:148] Found local preload: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 19:21:59.778595    3060 cache.go:57] Caching tarball of preloaded images
	I0329 19:21:59.778595    3060 preload.go:174] Found C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0329 19:21:59.778595    3060 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0329 19:21:59.779558    3060 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\config.json ...
	I0329 19:21:59.779558    3060 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\config.json: {Name:mk6dcdefc191c30bb34c1c8319cc8490444e173c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:22:00.328900    3060 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 19:22:00.328900    3060 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 19:22:00.328900    3060 cache.go:208] Successfully downloaded all kic artifacts
	I0329 19:22:00.328900    3060 start.go:348] acquiring machines lock for kindnet-20220329190230-1328: {Name:mk93919b231bfab46578efb1f64d7a60b9cbb338 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 19:22:00.328900    3060 start.go:352] acquired machines lock for "kindnet-20220329190230-1328" in 0s
	I0329 19:22:00.328900    3060 start.go:90] Provisioning new machine with config: &{Name:kindnet-20220329190230-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220329190230-1328 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 19:22:00.330945    3060 start.go:127] createHost starting for "" (driver="docker")
	I0329 19:22:00.336920    3060 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0329 19:22:00.336920    3060 start.go:161] libmachine.API.Create for "kindnet-20220329190230-1328" (driver="docker")
	I0329 19:22:00.336920    3060 client.go:168] LocalClient.Create starting
	I0329 19:22:00.337911    3060 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem
	I0329 19:22:00.337911    3060 main.go:130] libmachine: Decoding PEM data...
	I0329 19:22:00.337911    3060 main.go:130] libmachine: Parsing certificate...
	I0329 19:22:00.337911    3060 main.go:130] libmachine: Reading certificate data from C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem
	I0329 19:22:00.337911    3060 main.go:130] libmachine: Decoding PEM data...
	I0329 19:22:00.337911    3060 main.go:130] libmachine: Parsing certificate...
	I0329 19:22:00.347908    3060 cli_runner.go:133] Run: docker network inspect kindnet-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0329 19:22:00.882273    3060 cli_runner.go:180] docker network inspect kindnet-20220329190230-1328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0329 19:22:00.895271    3060 network_create.go:262] running [docker network inspect kindnet-20220329190230-1328] to gather additional debugging logs...
	I0329 19:22:00.895271    3060 cli_runner.go:133] Run: docker network inspect kindnet-20220329190230-1328
	W0329 19:22:01.452273    3060 cli_runner.go:180] docker network inspect kindnet-20220329190230-1328 returned with exit code 1
	I0329 19:22:01.452273    3060 network_create.go:265] error running [docker network inspect kindnet-20220329190230-1328]: docker network inspect kindnet-20220329190230-1328: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220329190230-1328
	I0329 19:22:01.452273    3060 network_create.go:267] output of [docker network inspect kindnet-20220329190230-1328]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220329190230-1328
	
	** /stderr **
	I0329 19:22:01.461276    3060 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 19:22:02.040411    3060 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014e470] misses:0}
	I0329 19:22:02.041403    3060 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 19:22:02.041403    3060 network_create.go:114] attempt to create docker network kindnet-20220329190230-1328 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0329 19:22:02.048399    3060 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220329190230-1328
	I0329 19:22:02.814865    3060 network_create.go:98] docker network kindnet-20220329190230-1328 192.168.49.0/24 created
	I0329 19:22:02.814865    3060 kic.go:106] calculated static IP "192.168.49.2" for the "kindnet-20220329190230-1328" container
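	The two steps above reserve the free private subnet 192.168.49.0/24 and then derive the gateway and the node's static IP from it. That derivation (gateway = first host, first container = second host, matching the ClientMin/ClientMax/Broadcast fields logged earlier) can be sketched with Python's stdlib `ipaddress` module; the function name below is hypothetical, not minikube's:

```python
import ipaddress

def derive_kic_addresses(cidr: str):
    """Given a /24 like 192.168.49.0/24, return (gateway, first
    container IP, broadcast) the way the log does: gateway .1,
    static container IP .2, broadcast .255."""
    net = ipaddress.ip_network(cidr)
    hosts = list(net.hosts())          # .1 through .254 for a /24
    gateway = hosts[0]                 # 192.168.49.1
    container_ip = hosts[1]            # 192.168.49.2
    return str(gateway), str(container_ip), str(net.broadcast_address)

gw, ip, bcast = derive_kic_addresses("192.168.49.0/24")
print(gw, ip, bcast)  # 192.168.49.1 192.168.49.2 192.168.49.255
```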
	I0329 19:22:02.828851    3060 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0329 19:22:03.395451    3060 cli_runner.go:133] Run: docker volume create kindnet-20220329190230-1328 --label name.minikube.sigs.k8s.io=kindnet-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true
	I0329 19:22:03.935415    3060 oci.go:102] Successfully created a docker volume kindnet-20220329190230-1328
	I0329 19:22:03.947282    3060 cli_runner.go:133] Run: docker run --rm --name kindnet-20220329190230-1328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220329190230-1328 --entrypoint /usr/bin/test -v kindnet-20220329190230-1328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0329 19:22:07.297404    3060 cli_runner.go:186] Completed: docker run --rm --name kindnet-20220329190230-1328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220329190230-1328 --entrypoint /usr/bin/test -v kindnet-20220329190230-1328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib: (3.3496831s)
	I0329 19:22:07.297404    3060 oci.go:106] Successfully prepared a docker volume kindnet-20220329190230-1328
	I0329 19:22:07.297553    3060 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 19:22:07.297553    3060 kic.go:179] Starting extracting preloaded images to volume ...
	I0329 19:22:07.306519    3060 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220329190230-1328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0329 19:22:37.957838    3060 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220329190230-1328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (30.6511432s)
	I0329 19:22:37.957838    3060 kic.go:188] duration metric: took 30.660109 seconds to extract preloaded images to volume
	I0329 19:22:37.964819    3060 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 19:22:38.713459    3060 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:72 OomKillDisable:true NGoroutines:55 SystemTime:2022-03-29 19:22:38.3368398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 19:22:38.720456    3060 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0329 19:22:39.511441    3060 cli_runner.go:133] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220329190230-1328 --name kindnet-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220329190230-1328 --network kindnet-20220329190230-1328 --ip 192.168.49.2 --volume kindnet-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0329 19:22:48.992610    3060 cli_runner.go:186] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220329190230-1328 --name kindnet-20220329190230-1328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220329190230-1328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220329190230-1328 --network kindnet-20220329190230-1328 --ip 192.168.49.2 --volume kindnet-20220329190230-1328:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: (9.4811191s)
	I0329 19:22:49.002201    3060 cli_runner.go:133] Run: docker container inspect kindnet-20220329190230-1328 --format={{.State.Running}}
	I0329 19:22:49.539668    3060 cli_runner.go:133] Run: docker container inspect kindnet-20220329190230-1328 --format={{.State.Status}}
	I0329 19:22:50.108163    3060 cli_runner.go:133] Run: docker exec kindnet-20220329190230-1328 stat /var/lib/dpkg/alternatives/iptables
	I0329 19:22:51.071482    3060 oci.go:278] the created container "kindnet-20220329190230-1328" has a running status.
	I0329 19:22:51.071528    3060 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-20220329190230-1328\id_rsa...
	I0329 19:22:51.692129    3060 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-20220329190230-1328\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0329 19:22:52.348130    3060 cli_runner.go:133] Run: docker container inspect kindnet-20220329190230-1328 --format={{.State.Status}}
	I0329 19:22:52.936513    3060 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0329 19:22:52.936513    3060 kic_runner.go:114] Args: [docker exec --privileged kindnet-20220329190230-1328 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0329 19:22:53.976264    3060 kic_runner.go:123] Done: [docker exec --privileged kindnet-20220329190230-1328 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.0396964s)
	I0329 19:22:53.983468    3060 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-20220329190230-1328\id_rsa...
	I0329 19:22:54.599526    3060 cli_runner.go:133] Run: docker container inspect kindnet-20220329190230-1328 --format={{.State.Status}}
	I0329 19:22:55.137762    3060 machine.go:88] provisioning docker machine ...
	I0329 19:22:55.138012    3060 ubuntu.go:169] provisioning hostname "kindnet-20220329190230-1328"
	I0329 19:22:55.149905    3060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220329190230-1328
	I0329 19:22:55.697013    3060 main.go:130] libmachine: Using SSH client type: native
	I0329 19:22:55.703979    3060 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57705 <nil> <nil>}
	I0329 19:22:55.703979    3060 main.go:130] libmachine: About to run SSH command:
	sudo hostname kindnet-20220329190230-1328 && echo "kindnet-20220329190230-1328" | sudo tee /etc/hostname
	I0329 19:22:55.934961    3060 main.go:130] libmachine: SSH cmd err, output: <nil>: kindnet-20220329190230-1328
	
	I0329 19:22:55.941957    3060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220329190230-1328
	I0329 19:22:56.531652    3060 main.go:130] libmachine: Using SSH client type: native
	I0329 19:22:56.532661    3060 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57705 <nil> <nil>}
	I0329 19:22:56.532661    3060 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-20220329190230-1328' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20220329190230-1328/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-20220329190230-1328' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 19:22:56.683664    3060 main.go:130] libmachine: SSH cmd err, output: <nil>: 
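	The SSH script above edits /etc/hosts idempotently: leave the file alone if the hostname is already present, rewrite an existing 127.0.1.1 entry if there is one, otherwise append a new entry. The same logic can be sketched in Python, operating on the file contents as a string (the function name is hypothetical, not minikube's):

```python
import re

def pin_hostname(hosts_text: str, hostname: str) -> str:
    """Mirror the shell logic: no-op if hostname already mapped,
    else rewrite an existing 127.0.1.1 line, else append one."""
    lines = hosts_text.splitlines()
    # Equivalent of: grep -xq '.*\s<hostname>' /etc/hosts
    if any(re.search(r"\s" + re.escape(hostname) + r"$", l) for l in lines):
        return hosts_text
    for i, l in enumerate(lines):
        if re.match(r"^127\.0\.1\.1\s", l):
            lines[i] = f"127.0.1.1 {hostname}"   # sed -i 's/^127.0.1.1\s.*/.../'
            break
    else:
        lines.append(f"127.0.1.1 {hostname}")    # tee -a /etc/hosts
    return "\n".join(lines) + "\n"

print(pin_hostname("127.0.0.1 localhost\n127.0.1.1 old-name\n", "kindnet-x"))
```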
	I0329 19:22:56.683664    3060 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube8\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube8\minikube-integration\.minikube}
	I0329 19:22:56.683664    3060 ubuntu.go:177] setting up certificates
	I0329 19:22:56.683664    3060 provision.go:83] configureAuth start
	I0329 19:22:56.696646    3060 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220329190230-1328
	I0329 19:22:57.208255    3060 provision.go:138] copyHostCerts
	I0329 19:22:57.208255    3060 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem, removing ...
	I0329 19:22:57.208255    3060 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.pem
	I0329 19:22:57.208255    3060 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0329 19:22:57.210265    3060 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem, removing ...
	I0329 19:22:57.210265    3060 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cert.pem
	I0329 19:22:57.210265    3060 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0329 19:22:57.211256    3060 exec_runner.go:144] found C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem, removing ...
	I0329 19:22:57.211256    3060 exec_runner.go:207] rm: C:\Users\jenkins.minikube8\minikube-integration\.minikube\key.pem
	I0329 19:22:57.212272    3060 exec_runner.go:151] cp: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube8\minikube-integration\.minikube/key.pem (1679 bytes)
	I0329 19:22:57.213268    3060 provision.go:112] generating server cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-20220329190230-1328 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220329190230-1328]
	I0329 19:22:57.381748    3060 provision.go:172] copyRemoteCerts
	I0329 19:22:57.401516    3060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 19:22:57.412047    3060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220329190230-1328
	I0329 19:22:57.942734    3060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57705 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-20220329190230-1328\id_rsa Username:docker}
	I0329 19:22:58.033732    3060 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 19:22:58.111846    3060 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1257 bytes)
	I0329 19:22:58.173394    3060 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0329 19:22:58.249245    3060 provision.go:86] duration metric: configureAuth took 1.5655725s
	I0329 19:22:58.249245    3060 ubuntu.go:193] setting minikube options for container-runtime
	I0329 19:22:58.250243    3060 config.go:176] Loaded profile config "kindnet-20220329190230-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:22:58.257240    3060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220329190230-1328
	I0329 19:22:58.823437    3060 main.go:130] libmachine: Using SSH client type: native
	I0329 19:22:58.824479    3060 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57705 <nil> <nil>}
	I0329 19:22:58.824479    3060 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 19:22:58.967217    3060 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 19:22:58.967282    3060 ubuntu.go:71] root file system type: overlay
	I0329 19:22:58.967590    3060 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 19:22:58.978749    3060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220329190230-1328
	I0329 19:22:59.487047    3060 main.go:130] libmachine: Using SSH client type: native
	I0329 19:22:59.488038    3060 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57705 <nil> <nil>}
	I0329 19:22:59.488038    3060 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 19:22:59.729056    3060 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0329 19:22:59.737060    3060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220329190230-1328
	I0329 19:23:00.266727    3060 main.go:130] libmachine: Using SSH client type: native
	I0329 19:23:00.266727    3060 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x766c80] 0x769b40 <nil>  [] 0s} 127.0.0.1 57705 <nil> <nil>}
	I0329 19:23:00.266727    3060 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0329 19:23:02.086037    3060 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-03-29 19:22:59.682870100 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0329 19:23:02.086255    3060 machine.go:91] provisioned docker machine in 6.9482488s
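	The SSH command that produced the diff above uses an install-if-changed idiom: `diff -u live new || { mv new live; systemctl restart docker; }`, so the unit file is replaced and the daemon restarted only when the staged file actually differs. A minimal sketch of that idiom in Python, using temp files and a boolean in place of the real restart (paths and the function name are stand-ins, not minikube's):

```python
import filecmp
import shutil
import tempfile
from pathlib import Path

def install_if_changed(live: Path, staged: Path) -> bool:
    """Replace `live` with `staged` and report True (restart needed)
    only when the two files differ; otherwise discard the staged copy."""
    if live.exists() and filecmp.cmp(live, staged, shallow=False):
        staged.unlink()            # identical: nothing to install
        return False
    shutil.move(staged, live)      # mv docker.service.new docker.service
    return True                    # caller would now restart the daemon

d = Path(tempfile.mkdtemp())
(d / "docker.service").write_text("old-config\n")      # "live" unit
(d / "docker.service.new").write_text("new-config\n")  # staged unit
changed = install_if_changed(d / "docker.service", d / "docker.service.new")
print(changed, (d / "docker.service").read_text().strip())  # True new-config
```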
	I0329 19:23:02.086255    3060 client.go:171] LocalClient.Create took 1m1.7489871s
	I0329 19:23:02.086372    3060 start.go:169] duration metric: libmachine.API.Create for "kindnet-20220329190230-1328" took 1m1.7491039s
	I0329 19:23:02.086372    3060 start.go:302] post-start starting for "kindnet-20220329190230-1328" (driver="docker")
	I0329 19:23:02.086372    3060 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 19:23:02.107301    3060 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 19:23:02.118388    3060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220329190230-1328
	I0329 19:23:02.630798    3060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57705 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-20220329190230-1328\id_rsa Username:docker}
	I0329 19:23:02.860916    3060 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 19:23:02.878108    3060 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 19:23:02.878234    3060 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 19:23:02.878234    3060 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 19:23:02.878234    3060 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 19:23:02.878380    3060 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\addons for local assets ...
	I0329 19:23:02.878717    3060 filesync.go:126] Scanning C:\Users\jenkins.minikube8\minikube-integration\.minikube\files for local assets ...
	I0329 19:23:02.879229    3060 filesync.go:149] local asset: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem -> 13282.pem in /etc/ssl/certs
	I0329 19:23:02.894025    3060 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0329 19:23:02.921040    3060 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem --> /etc/ssl/certs/13282.pem (1708 bytes)
	I0329 19:23:02.997410    3060 start.go:305] post-start completed in 911.0325ms
	I0329 19:23:03.009459    3060 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220329190230-1328
	I0329 19:23:03.524463    3060 profile.go:148] Saving config to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\config.json ...
	I0329 19:23:03.537717    3060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 19:23:03.544715    3060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220329190230-1328
	I0329 19:23:04.121075    3060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57705 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-20220329190230-1328\id_rsa Username:docker}
	I0329 19:23:04.273300    3060 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 19:23:04.303410    3060 start.go:130] duration metric: createHost completed in 1m3.9721041s
	I0329 19:23:04.303410    3060 start.go:81] releasing machines lock for "kindnet-20220329190230-1328", held for 1m3.9741489s
	I0329 19:23:04.316591    3060 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220329190230-1328
	I0329 19:23:04.808219    3060 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0329 19:23:04.822445    3060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220329190230-1328
	I0329 19:23:04.826610    3060 ssh_runner.go:195] Run: systemctl --version
	I0329 19:23:04.838362    3060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220329190230-1328
	I0329 19:23:05.293806    3060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57705 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-20220329190230-1328\id_rsa Username:docker}
	I0329 19:23:05.324724    3060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57705 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-20220329190230-1328\id_rsa Username:docker}
	I0329 19:23:05.484787    3060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0329 19:23:05.595816    3060 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 19:23:05.623784    3060 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0329 19:23:05.633784    3060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0329 19:23:05.657777    3060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0329 19:23:05.708418    3060 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0329 19:23:05.919498    3060 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0329 19:23:06.087191    3060 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 19:23:06.129201    3060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0329 19:23:06.260400    3060 ssh_runner.go:195] Run: sudo systemctl start docker
	I0329 19:23:06.307696    3060 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 19:23:06.449111    3060 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 19:23:06.583964    3060 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0329 19:23:06.595160    3060 cli_runner.go:133] Run: docker exec -t kindnet-20220329190230-1328 dig +short host.docker.internal
	I0329 19:23:07.599355    3060 cli_runner.go:186] Completed: docker exec -t kindnet-20220329190230-1328 dig +short host.docker.internal: (1.003147s)
	I0329 19:23:07.599391    3060 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0329 19:23:07.611213    3060 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0329 19:23:07.627382    3060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 19:23:07.660726    3060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20220329190230-1328
	I0329 19:23:08.198206    3060 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0329 19:23:08.198206    3060 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 19:23:08.206959    3060 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 19:23:08.288009    3060 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 19:23:08.288009    3060 docker.go:537] Images already preloaded, skipping extraction
	I0329 19:23:08.299405    3060 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 19:23:08.378949    3060 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 19:23:08.378949    3060 cache_images.go:84] Images are preloaded, skipping loading
	I0329 19:23:08.392008    3060 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0329 19:23:08.631306    3060 cni.go:93] Creating CNI manager for "kindnet"
	I0329 19:23:08.631306    3060 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0329 19:23:08.631306    3060 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20220329190230-1328 NodeName:kindnet-20220329190230-1328 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0329 19:23:08.631306    3060 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kindnet-20220329190230-1328"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0329 19:23:08.631306    3060 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kindnet-20220329190230-1328 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220329190230-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I0329 19:23:08.642705    3060 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0329 19:23:08.680987    3060 binaries.go:44] Found k8s binaries, skipping transfer
	I0329 19:23:08.693389    3060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0329 19:23:08.718410    3060 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (405 bytes)
	I0329 19:23:08.757395    3060 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0329 19:23:08.795408    3060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
	I0329 19:23:08.876871    3060 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0329 19:23:08.890904    3060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 19:23:08.917865    3060 certs.go:54] Setting up C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328 for IP: 192.168.49.2
	I0329 19:23:08.918867    3060 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key
	I0329 19:23:08.918867    3060 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key
	I0329 19:23:08.918867    3060 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\client.key
	I0329 19:23:08.919881    3060 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\client.crt with IP's: []
	I0329 19:23:09.342929    3060 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\client.crt ...
	I0329 19:23:09.342929    3060 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\client.crt: {Name:mk8d20b52f8bc48248d53f04bd73a17e24aeadd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:23:09.344929    3060 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\client.key ...
	I0329 19:23:09.344929    3060 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\client.key: {Name:mk486aaac6296d4a8ddd9a865f848b78f175d278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:23:09.345928    3060 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\apiserver.key.dd3b5fb2
	I0329 19:23:09.346479    3060 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0329 19:23:09.424515    3060 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\apiserver.crt.dd3b5fb2 ...
	I0329 19:23:09.424515    3060 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\apiserver.crt.dd3b5fb2: {Name:mkd14b73c2e0e093f96401df1121859f98cb5aa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:23:09.426521    3060 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\apiserver.key.dd3b5fb2 ...
	I0329 19:23:09.426521    3060 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\apiserver.key.dd3b5fb2: {Name:mk7622f69cdf2d31697f153e792e86c761f3ec80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:23:09.427833    3060 certs.go:320] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\apiserver.crt.dd3b5fb2 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\apiserver.crt
	I0329 19:23:09.435188    3060 certs.go:324] copying C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\apiserver.key.dd3b5fb2 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\apiserver.key
	I0329 19:23:09.437177    3060 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\proxy-client.key
	I0329 19:23:09.437837    3060 crypto.go:68] Generating cert C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\proxy-client.crt with IP's: []
	I0329 19:23:10.084401    3060 crypto.go:156] Writing cert to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\proxy-client.crt ...
	I0329 19:23:10.084401    3060 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\proxy-client.crt: {Name:mkf453179fea35c753d0cfec1d91e0de951a558d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:23:10.086259    3060 crypto.go:164] Writing key to C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\proxy-client.key ...
	I0329 19:23:10.086355    3060 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\proxy-client.key: {Name:mkf519f1d244344c0e7af7de57e836eca8e689f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:23:10.096031    3060 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\1328.pem (1338 bytes)
	W0329 19:23:10.096745    3060 certs.go:384] ignoring C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\1328_empty.pem, impossibly tiny 0 bytes
	I0329 19:23:10.097449    3060 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0329 19:23:10.097635    3060 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0329 19:23:10.098394    3060 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0329 19:23:10.098394    3060 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0329 19:23:10.099393    3060 certs.go:388] found cert: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem (1708 bytes)
	I0329 19:23:10.104418    3060 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0329 19:23:10.172717    3060 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0329 19:23:10.234421    3060 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0329 19:23:10.330276    3060 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kindnet-20220329190230-1328\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0329 19:23:10.385578    3060 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0329 19:23:10.502565    3060 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0329 19:23:10.561030    3060 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0329 19:23:10.618095    3060 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0329 19:23:10.709521    3060 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0329 19:23:10.851257    3060 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\certs\1328.pem --> /usr/share/ca-certificates/1328.pem (1338 bytes)
	I0329 19:23:10.916400    3060 ssh_runner.go:362] scp C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\ssl\certs\13282.pem --> /usr/share/ca-certificates/13282.pem (1708 bytes)
	I0329 19:23:10.985362    3060 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0329 19:23:11.042008    3060 ssh_runner.go:195] Run: openssl version
	I0329 19:23:11.093015    3060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0329 19:23:11.128020    3060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0329 19:23:11.156375    3060 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 29 17:18 /usr/share/ca-certificates/minikubeCA.pem
	I0329 19:23:11.171411    3060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0329 19:23:11.199812    3060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0329 19:23:11.230821    3060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1328.pem && ln -fs /usr/share/ca-certificates/1328.pem /etc/ssl/certs/1328.pem"
	I0329 19:23:11.274205    3060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1328.pem
	I0329 19:23:11.295173    3060 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 29 17:29 /usr/share/ca-certificates/1328.pem
	I0329 19:23:11.307144    3060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1328.pem
	I0329 19:23:11.329144    3060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1328.pem /etc/ssl/certs/51391683.0"
	I0329 19:23:11.363442    3060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13282.pem && ln -fs /usr/share/ca-certificates/13282.pem /etc/ssl/certs/13282.pem"
	I0329 19:23:11.401037    3060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13282.pem
	I0329 19:23:11.415043    3060 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 29 17:29 /usr/share/ca-certificates/13282.pem
	I0329 19:23:11.429029    3060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13282.pem
	I0329 19:23:11.456547    3060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13282.pem /etc/ssl/certs/3ec20f2e.0"
	I0329 19:23:11.488040    3060 kubeadm.go:391] StartCluster: {Name:kindnet-20220329190230-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220329190230-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 19:23:11.498122    3060 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0329 19:23:11.588586    3060 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0329 19:23:11.637998    3060 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0329 19:23:11.667636    3060 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0329 19:23:11.686631    3060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0329 19:23:11.713648    3060 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0329 19:23:11.713648    3060 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0329 19:23:38.476575    3060 out.go:203]   - Generating certificates and keys ...
	I0329 19:23:38.490452    3060 out.go:203]   - Booting up control plane ...
	I0329 19:23:38.496470    3060 out.go:203]   - Configuring RBAC rules ...
	I0329 19:23:38.501484    3060 cni.go:93] Creating CNI manager for "kindnet"
	I0329 19:23:38.509470    3060 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0329 19:23:38.523448    3060 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0329 19:23:38.538463    3060 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0329 19:23:38.538463    3060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0329 19:23:38.622211    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0329 19:23:42.374159    3060 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (3.751926s)
	I0329 19:23:42.374159    3060 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0329 19:23:42.396521    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3 minikube.k8s.io/name=kindnet-20220329190230-1328 minikube.k8s.io/updated_at=2022_03_29T19_23_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:42.402701    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:42.422010    3060 ops.go:34] apiserver oom_adj: -16
	I0329 19:23:42.694441    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:43.343885    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:43.830046    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:44.336998    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:44.835900    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:45.344299    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:45.840699    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:46.335935    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:46.836539    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:47.337770    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:47.837638    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:48.340266    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:48.842945    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:49.334117    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:49.836884    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:50.341175    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:50.836254    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:51.838278    3060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 19:23:52.488480    3060 kubeadm.go:1020] duration metric: took 10.1142631s to wait for elevateKubeSystemPrivileges.
	I0329 19:23:52.488659    3060 kubeadm.go:393] StartCluster complete in 41.0003812s
	I0329 19:23:52.488659    3060 settings.go:142] acquiring lock: {Name:mkef8bbc389dbb185414693c85b2ca1f1524f773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:23:52.488659    3060 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0329 19:23:52.493590    3060 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube8\minikube-integration\kubeconfig: {Name:mkae4c781fbfb916db801be8b13665a6fdce8de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 19:23:53.109527    3060 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20220329190230-1328" rescaled to 1
	I0329 19:23:53.109527    3060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0329 19:23:53.109527    3060 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 19:23:53.109527    3060 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0329 19:23:53.112798    3060 out.go:176] * Verifying Kubernetes components...
	I0329 19:23:53.109527    3060 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20220329190230-1328"
	I0329 19:23:53.109527    3060 addons.go:65] Setting default-storageclass=true in profile "kindnet-20220329190230-1328"
	I0329 19:23:53.110639    3060 config.go:176] Loaded profile config "kindnet-20220329190230-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 19:23:53.112953    3060 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20220329190230-1328"
	W0329 19:23:53.113002    3060 addons.go:165] addon storage-provisioner should already be in state true
	I0329 19:23:53.113178    3060 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20220329190230-1328"
	I0329 19:23:53.113236    3060 host.go:66] Checking if "kindnet-20220329190230-1328" exists ...
	I0329 19:23:53.133498    3060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 19:23:53.137497    3060 cli_runner.go:133] Run: docker container inspect kindnet-20220329190230-1328 --format={{.State.Status}}
	I0329 19:23:53.137497    3060 cli_runner.go:133] Run: docker container inspect kindnet-20220329190230-1328 --format={{.State.Status}}
	I0329 19:23:53.516548    3060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0329 19:23:53.527511    3060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20220329190230-1328
	I0329 19:23:53.732522    3060 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 19:23:53.732522    3060 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 19:23:53.732522    3060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0329 19:23:53.739508    3060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220329190230-1328
	I0329 19:23:53.789002    3060 addons.go:153] Setting addon default-storageclass=true in "kindnet-20220329190230-1328"
	W0329 19:23:53.789002    3060 addons.go:165] addon default-storageclass should already be in state true
	I0329 19:23:53.789002    3060 host.go:66] Checking if "kindnet-20220329190230-1328" exists ...
	I0329 19:23:53.823436    3060 cli_runner.go:133] Run: docker container inspect kindnet-20220329190230-1328 --format={{.State.Status}}
	I0329 19:23:54.133170    3060 node_ready.go:35] waiting up to 5m0s for node "kindnet-20220329190230-1328" to be "Ready" ...
	I0329 19:23:54.337165    3060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57705 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-20220329190230-1328\id_rsa Username:docker}
	I0329 19:23:54.397185    3060 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0329 19:23:54.434192    3060 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0329 19:23:54.435186    3060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0329 19:23:54.442172    3060 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220329190230-1328
	I0329 19:23:54.646532    3060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 19:23:55.081809    3060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57705 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\kindnet-20220329190230-1328\id_rsa Username:docker}
	I0329 19:23:55.622810    3060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0329 19:23:55.870457    3060 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.2239176s)
	I0329 19:23:56.193618    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:23:56.432726    3060 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0329 19:23:56.432726    3060 addons.go:417] enableAddons completed in 3.3231798s
	I0329 19:23:58.197313    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:00.268863    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:02.697888    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:04.768223    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:07.197890    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:09.687506    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:11.691956    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:14.195276    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:16.285787    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:18.684974    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:20.687122    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:22.691845    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:25.185597    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:27.187221    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:29.194637    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:31.699031    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:34.188588    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:36.189987    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:38.192670    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:40.687176    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:42.699613    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:45.184868    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:47.196974    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:49.689304    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:51.689401    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:53.694666    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:56.188073    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:24:58.189403    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:00.198336    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:02.688426    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:04.693976    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:07.197217    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:09.200700    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:11.694278    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:14.200084    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:16.201217    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:18.201556    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:20.710284    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:23.199667    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:25.695535    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:28.193333    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:30.207684    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:32.695898    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:34.696319    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:36.699784    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:39.194170    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:41.202147    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:43.687775    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:45.698215    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:48.186305    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:50.190480    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:52.202306    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:54.690904    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:56.694207    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:25:58.700053    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:01.198642    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:03.689878    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:05.700881    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:07.706864    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:09.710465    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:12.200263    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:14.690585    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:16.698937    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:18.710424    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:21.187500    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:23.195380    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:25.696403    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:27.699436    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:30.190820    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:32.198982    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:34.699761    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:37.183867    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:39.203227    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:41.688842    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:44.197394    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:46.696739    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:48.699477    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:50.704190    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:53.193394    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:55.207542    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:57.696461    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:26:59.697049    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:01.703872    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:04.200726    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:06.691795    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:08.704543    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:11.190597    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:13.191091    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:15.200678    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:17.689802    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:19.694777    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:22.196490    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:24.694882    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:26.699076    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:29.191783    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:31.688894    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:33.697910    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:36.200719    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:38.699807    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:41.190166    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:43.195426    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:45.708431    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:48.195417    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:50.200272    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:52.688160    3060 node_ready.go:58] node "kindnet-20220329190230-1328" has status "Ready":"False"
	I0329 19:27:54.198058    3060 node_ready.go:38] duration metric: took 4m0.0635721s waiting for node "kindnet-20220329190230-1328" to be "Ready" ...
	I0329 19:27:54.201019    3060 out.go:176] 
	W0329 19:27:54.201019    3060 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0329 19:27:54.201019    3060 out.go:241] * 
	* 
	W0329 19:27:54.202021    3060 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0329 19:27:54.206027    3060 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (359.13s)
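The timeout above comes from a fixed-interval poll: node_ready.go probes the node roughly every 2–2.5 s until the 5m budget announced at start.go:208 is exhausted, then reports the elapsed duration. The wait shape can be sketched generically as follows; this is a hypothetical helper for illustration, not minikube's actual implementation:

```python
import time

def wait_for(check, timeout=300.0, interval=2.5):
    """Poll check() until it returns True or the timeout elapses.

    Mirrors the shape of the log above: repeated status probes at a fixed
    interval, ending in a timeout error once the budget is spent.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.0f}s")
```

In this run the condition (node `Ready`) never became true, so the loop ran the full budget and the test surfaced `waitNodeCondition: timed out waiting for the condition`.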

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (359.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.696598s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.617606s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (24.1905934s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6272432s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
E0329 19:31:03.321754    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5766801s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
E0329 19:31:31.133644    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6056637s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5993182s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5850729s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0329 19:32:30.889898    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6240065s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
E0329 19:33:16.194082    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5718365s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0329 19:33:22.773117    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
E0329 19:34:13.487214    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 19:34:16.209239    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5937888s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0329 19:34:39.365452    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6247907s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (359.74s)

TestNetworkPlugins/group/bridge/DNS (371.76s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6212961s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5781992s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
E0329 19:36:03.319101    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6887206s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6279522s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6020425s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.7783958s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
E0329 19:37:19.465067    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
E0329 19:37:30.885718    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.7321829s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.7324286s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0329 19:38:16.206117    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:38:22.777952    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6173447s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0329 19:38:54.098568    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:39:05.042179    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:39:05.057521    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:39:05.073252    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:39:05.103636    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:39:05.150819    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:39:05.244100    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:39:05.415741    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:39:05.745493    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:39:06.396264    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:39:07.690798    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:39:10.253807    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:39:13.489573    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 19:39:15.375954    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
E0329 19:39:16.210044    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
E0329 19:39:25.630212    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6087196s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E0329 19:39:46.123421    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
E0329 19:40:27.091417    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5743546s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6074034s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (371.76s)

TestNetworkPlugins/group/kubenet/DNS (315.09s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5636067s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5764883s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5879804s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (20.7435061s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5971876s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (69.7µs)
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0329 19:43:16.204637    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0329 19:43:19.629071    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
E0329 19:43:19.644573    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
E0329 19:43:19.659871    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
E0329 19:43:19.690748    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
E0329 19:43:19.738242    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
E0329 19:43:19.832078    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
E0329 19:43:20.004384    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
E0329 19:43:20.330042    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
E0329 19:43:20.976552    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
E0329 19:43:22.258630    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
E0329 19:43:22.777972    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 19:43:24.838191    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (683.5µs)
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0329 19:44:32.872876    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:44:41.666403    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
E0329 19:45:05.580549    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:45:05.594599    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:45:05.609827    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:45:05.640604    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:45:05.686793    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:45:05.769952    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:45:05.941863    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:45:06.272060    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:45:06.916126    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:45:08.204414    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:45:10.778702    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:45:15.904236    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:45:26.152343    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0329 19:45:46.641765    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:46:03.329457    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:46:03.600119    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (107.1µs)
net_test.go:169: failed to do nslookup on kubernetes.default: context deadline exceeded
net_test.go:174: failed nslookup: got="", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kubenet/DNS (315.09s)
E0329 19:51:19.377882    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:51:26.820941    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:52:07.782650    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:52:30.886011    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:53:16.198782    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:53:19.628815    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
E0329 19:53:22.778603    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 19:53:29.713894    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\kubenet-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:53:59.478930    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
E0329 19:54:05.043534    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:54:13.494564    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 19:54:16.222439    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
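The kubenet DNS failure above reduces to a substring check: net_test.go:174 reports `got="", want=*"10.96.0.1"*`, i.e. the `nslookup kubernetes.default` output is expected to contain the cluster service IP, and the run produced no output before the context deadline. A minimal sketch of that assertion (the helper name here is illustrative, not minikube's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// containsServiceIP reports whether nslookup output mentions the expected
// kubernetes.default ClusterIP. Illustrative helper: the real check in
// net_test.go matches the output against a wildcard want pattern.
func containsServiceIP(out, want string) bool {
	return strings.Contains(out, want)
}

func main() {
	got := "" // the failed run timed out with empty output
	fmt.Println(containsServiceIP(got, "10.96.0.1")) // prints false, matching the reported failure
}
```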


Test pass (237/272)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 19.86
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.37
10 TestDownloadOnly/v1.23.5/json-events 13.26
11 TestDownloadOnly/v1.23.5/preload-exists 0
14 TestDownloadOnly/v1.23.5/kubectl 0
15 TestDownloadOnly/v1.23.5/LogsDuration 0.33
17 TestDownloadOnly/v1.23.6-rc.0/json-events 13.64
18 TestDownloadOnly/v1.23.6-rc.0/preload-exists 0
21 TestDownloadOnly/v1.23.6-rc.0/kubectl 0
22 TestDownloadOnly/v1.23.6-rc.0/LogsDuration 0.65
23 TestDownloadOnly/DeleteAll 6.31
24 TestDownloadOnly/DeleteAlwaysSucceeds 4.21
25 TestDownloadOnlyKic 50.9
26 TestBinaryMirror 9.66
27 TestOffline 211.99
29 TestAddons/Setup 467.48
33 TestAddons/parallel/MetricsServer 11.16
34 TestAddons/parallel/HelmTiller 30.16
36 TestAddons/parallel/CSI 94.36
38 TestAddons/serial/GCPAuth 24.49
39 TestAddons/StoppedEnableDisable 21.18
40 TestCertOptions 167.56
42 TestDockerFlags 161.85
43 TestForceSystemdFlag 209.63
44 TestForceSystemdEnv 462.73
49 TestErrorSpam/setup 97.61
50 TestErrorSpam/start 11.63
51 TestErrorSpam/status 12.29
52 TestErrorSpam/pause 11.83
53 TestErrorSpam/unpause 12.42
54 TestErrorSpam/stop 26.92
57 TestFunctional/serial/CopySyncFile 0.03
58 TestFunctional/serial/StartWithProxy 109.85
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 20.45
61 TestFunctional/serial/KubeContext 0.22
62 TestFunctional/serial/KubectlGetPods 0.35
65 TestFunctional/serial/CacheCmd/cache/add_remote 15
66 TestFunctional/serial/CacheCmd/cache/add_local 7.02
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.31
68 TestFunctional/serial/CacheCmd/cache/list 0.34
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 3.96
70 TestFunctional/serial/CacheCmd/cache/cache_reload 16.34
71 TestFunctional/serial/CacheCmd/cache/delete 0.62
72 TestFunctional/serial/MinikubeKubectlCmd 2.5
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.95
74 TestFunctional/serial/ExtraConfig 54.76
75 TestFunctional/serial/ComponentHealth 0.27
76 TestFunctional/serial/LogsCmd 5.95
77 TestFunctional/serial/LogsFileCmd 6.04
79 TestFunctional/parallel/ConfigCmd 1.9
81 TestFunctional/parallel/DryRun 7.06
82 TestFunctional/parallel/InternationalLanguage 3.17
83 TestFunctional/parallel/StatusCmd 12.41
88 TestFunctional/parallel/AddonsCmd 2.82
89 TestFunctional/parallel/PersistentVolumeClaim 55.7
91 TestFunctional/parallel/SSHCmd 8.07
92 TestFunctional/parallel/CpCmd 15.27
93 TestFunctional/parallel/MySQL 79.77
94 TestFunctional/parallel/FileSync 3.92
95 TestFunctional/parallel/CertSync 24.33
99 TestFunctional/parallel/NodeLabels 0.29
101 TestFunctional/parallel/NonActiveRuntimeDisabled 4.05
103 TestFunctional/parallel/ProfileCmd/profile_not_create 6.5
104 TestFunctional/parallel/ProfileCmd/profile_list 4.41
105 TestFunctional/parallel/ProfileCmd/profile_json_output 4.59
107 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
109 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.69
110 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.26
115 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
116 TestFunctional/parallel/DockerEnv/powershell 16.76
117 TestFunctional/parallel/Version/short 0.35
118 TestFunctional/parallel/Version/components 4.45
119 TestFunctional/parallel/ImageCommands/ImageListShort 3.1
120 TestFunctional/parallel/ImageCommands/ImageListTable 3.16
121 TestFunctional/parallel/ImageCommands/ImageListJson 3.22
122 TestFunctional/parallel/ImageCommands/ImageListYaml 3.14
123 TestFunctional/parallel/ImageCommands/ImageBuild 14.05
124 TestFunctional/parallel/ImageCommands/Setup 3.75
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 15.69
126 TestFunctional/parallel/UpdateContextCmd/no_changes 2.86
127 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.83
128 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.86
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 10.9
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 24.48
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 11.69
132 TestFunctional/parallel/ImageCommands/ImageRemove 7.62
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 11.89
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 10.56
135 TestFunctional/delete_addon-resizer_images 0.01
136 TestFunctional/delete_my-image_image 0.01
137 TestFunctional/delete_minikube_cached_images 0.01
140 TestIngressAddonLegacy/StartLegacyK8sCluster 124.16
142 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 47.17
143 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 3.51
147 TestJSONOutput/start/Command 112.98
148 TestJSONOutput/start/Audit 0
150 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
153 TestJSONOutput/pause/Command 4.52
154 TestJSONOutput/pause/Audit 0
156 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/unpause/Command 4.21
160 TestJSONOutput/unpause/Audit 0
162 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/stop/Command 15.73
166 TestJSONOutput/stop/Audit 0
168 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
170 TestErrorJSONOutput 4.45
172 TestKicCustomNetwork/create_custom_network 109.3
173 TestKicCustomNetwork/use_default_bridge_network 110.81
174 TestKicExistingNetwork 113.18
175 TestKicCustomSubnet 110.91
176 TestMainNoArgs 0.3
179 TestMountStart/serial/StartWithMountFirst 29.76
180 TestMountStart/serial/VerifyMountFirst 3.85
181 TestMountStart/serial/StartWithMountSecond 30.04
182 TestMountStart/serial/VerifyMountSecond 3.89
183 TestMountStart/serial/DeleteFirst 10.81
184 TestMountStart/serial/VerifyMountPostDelete 3.86
185 TestMountStart/serial/Stop 5.83
186 TestMountStart/serial/RestartStopped 16.95
187 TestMountStart/serial/VerifyMountPostStop 3.83
190 TestMultiNode/serial/FreshStart2Nodes 224.78
191 TestMultiNode/serial/DeployApp2Nodes 26.48
192 TestMultiNode/serial/PingHostFrom2Pods 10.58
193 TestMultiNode/serial/AddNode 101.12
194 TestMultiNode/serial/ProfileList 4.03
195 TestMultiNode/serial/CopyFile 131.51
196 TestMultiNode/serial/StopNode 17.81
197 TestMultiNode/serial/StartAfterStop 42.67
198 TestMultiNode/serial/RestartKeepsNodes 210.42
199 TestMultiNode/serial/DeleteNode 25.71
200 TestMultiNode/serial/StopMultiNode 34.35
201 TestMultiNode/serial/RestartMultiNode 155.17
202 TestMultiNode/serial/ValidateNameConflict 123.62
206 TestPreload 307.09
207 TestScheduledStopWindows 194.86
211 TestInsufficientStorage 80.3
212 TestRunningBinaryUpgrade 298.89
214 TestKubernetesUpgrade 590.38
215 TestMissingContainerUpgrade 429.82
217 TestStoppedBinaryUpgrade/Setup 1.15
218 TestNoKubernetes/serial/StartNoK8sWithVersion 0.41
219 TestNoKubernetes/serial/StartWithK8s 175.99
220 TestStoppedBinaryUpgrade/Upgrade 391.86
221 TestNoKubernetes/serial/StartWithStopK8s 67.52
222 TestNoKubernetes/serial/Start 37.95
223 TestNoKubernetes/serial/VerifyK8sNotRunning 4.63
243 TestStoppedBinaryUpgrade/MinikubeLogs 6.97
245 TestPause/serial/Start 113.73
246 TestPause/serial/SecondStartNoReconfiguration 27.74
247 TestPause/serial/Pause 5.05
248 TestPause/serial/VerifyStatus 4.75
249 TestPause/serial/Unpause 4.99
250 TestPause/serial/PauseAgain 5.27
251 TestPause/serial/DeletePaused 17.18
252 TestPause/serial/VerifyDeletedResources 11.24
253 TestNetworkPlugins/group/auto/Start 157.72
254 TestNetworkPlugins/group/auto/KubeletFlags 4.09
255 TestNetworkPlugins/group/auto/NetCatPod 21.17
257 TestNetworkPlugins/group/auto/DNS 0.57
258 TestNetworkPlugins/group/auto/Localhost 0.58
259 TestNetworkPlugins/group/auto/HairPin 5.59
261 TestNetworkPlugins/group/custom-weave/Start 137.06
262 TestNetworkPlugins/group/custom-weave/KubeletFlags 3.99
263 TestNetworkPlugins/group/custom-weave/NetCatPod 22.04
264 TestNetworkPlugins/group/false/Start 173.1
265 TestNetworkPlugins/group/false/KubeletFlags 5.12
266 TestNetworkPlugins/group/false/NetCatPod 23.88
267 TestNetworkPlugins/group/false/DNS 0.67
268 TestNetworkPlugins/group/false/Localhost 0.63
269 TestNetworkPlugins/group/false/HairPin 5.67
271 TestNetworkPlugins/group/enable-default-cni/Start 369.63
272 TestNetworkPlugins/group/bridge/Start 387.8
273 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 4.24
274 TestNetworkPlugins/group/enable-default-cni/NetCatPod 35
275 TestNetworkPlugins/group/kubenet/Start 667.76
278 TestStartStop/group/old-k8s-version/serial/FirstStart 174.21
279 TestStartStop/group/old-k8s-version/serial/DeployApp 10.31
280 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 4.44
281 TestStartStop/group/old-k8s-version/serial/Stop 16.17
282 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 4.61
283 TestStartStop/group/old-k8s-version/serial/SecondStart 427
284 TestNetworkPlugins/group/bridge/KubeletFlags 4.76
285 TestNetworkPlugins/group/bridge/NetCatPod 21.89
288 TestStartStop/group/no-preload/serial/FirstStart 430.59
289 TestNetworkPlugins/group/kubenet/KubeletFlags 4.02
290 TestNetworkPlugins/group/kubenet/NetCatPod 20.29
291 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.04
293 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.53
294 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 4.06
295 TestStartStop/group/old-k8s-version/serial/Pause 26.45
297 TestStartStop/group/embed-certs/serial/FirstStart 386.02
299 TestStartStop/group/default-k8s-different-port/serial/FirstStart 374.66
300 TestStartStop/group/no-preload/serial/DeployApp 11.3
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 4.51
302 TestStartStop/group/no-preload/serial/Stop 16.37
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 4.9
304 TestStartStop/group/no-preload/serial/SecondStart 633.58
306 TestStartStop/group/newest-cni/serial/FirstStart 108.96
307 TestStartStop/group/default-k8s-different-port/serial/DeployApp 12.19
308 TestStartStop/group/embed-certs/serial/DeployApp 12.25
309 TestStartStop/group/newest-cni/serial/DeployApp 0
310 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 4.85
311 TestStartStop/group/newest-cni/serial/Stop 17.17
312 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 4.77
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 4.81
314 TestStartStop/group/default-k8s-different-port/serial/Stop 17.05
315 TestStartStop/group/embed-certs/serial/Stop 16.24
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 4.7
317 TestStartStop/group/newest-cni/serial/SecondStart 55.72
318 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 4.7
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 4.77
320 TestStartStop/group/default-k8s-different-port/serial/SecondStart 627.33
321 TestStartStop/group/embed-certs/serial/SecondStart 620.57
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 4.71
325 TestStartStop/group/newest-cni/serial/Pause 30.8
326 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.04
327 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.68
328 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 4.4
329 TestStartStop/group/no-preload/serial/Pause 28.93
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.04
331 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.04
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.5
333 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.51
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 3.97
335 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 4.06
336 TestStartStop/group/embed-certs/serial/Pause 26.69
337 TestStartStop/group/default-k8s-different-port/serial/Pause 27.39
x
+
TestDownloadOnly/v1.16.0/json-events (19.86s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220329171422-1328 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220329171422-1328 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (19.8577796s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (19.86s)

x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.37s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220329171422-1328
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220329171422-1328: exit status 85 (372.4737ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 17:14:24
	Running on machine: minikube8
	Binary: Built with gc go1.17.7 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 17:14:24.308942    1956 out.go:297] Setting OutFile to fd 644 ...
	I0329 17:14:24.355524    1956 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:14:24.355524    1956 out.go:310] Setting ErrFile to fd 648...
	I0329 17:14:24.355524    1956 out.go:344] TERM=,COLORTERM=, which probably does not support color
	W0329 17:14:24.378647    1956 root.go:293] Error reading config file at C:\Users\jenkins.minikube8\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube8\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0329 17:14:24.382776    1956 out.go:304] Setting JSON to true
	I0329 17:14:24.386573    1956 start.go:114] hostinfo: {"hostname":"minikube8","uptime":1261,"bootTime":1648572803,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0329 17:14:24.386738    1956 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 17:14:24.416278    1956 notify.go:193] Checking for updates...
	W0329 17:14:24.416278    1956 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0329 17:14:24.420343    1956 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 17:14:26.365197    1956 docker.go:137] docker version: linux-20.10.13
	I0329 17:14:26.372355    1956 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:14:27.069906    1956 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:42 OomKillDisable:true NGoroutines:42 SystemTime:2022-03-29 17:14:26.7617643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:14:27.072924    1956 start.go:283] selected driver: docker
	I0329 17:14:27.073021    1956 start.go:800] validating driver "docker" against <nil>
	I0329 17:14:27.105812    1956 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:14:27.798328    1956 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:42 OomKillDisable:true NGoroutines:42 SystemTime:2022-03-29 17:14:27.4626489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:14:27.798798    1956 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0329 17:14:27.928912    1956 start_flags.go:373] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I0329 17:14:27.929575    1956 start_flags.go:819] Wait components to verify : map[apiserver:true system_pods:true]
	I0329 17:14:27.929575    1956 cni.go:93] Creating CNI manager for ""
	I0329 17:14:27.929575    1956 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 17:14:27.929575    1956 start_flags.go:306] config:
	{Name:download-only-20220329171422-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220329171422-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:14:27.958616    1956 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 17:14:27.961441    1956 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0329 17:14:27.961599    1956 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 17:14:28.008759    1956 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.16.0/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0329 17:14:28.008759    1956 cache.go:57] Caching tarball of preloaded images
	I0329 17:14:28.009472    1956 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0329 17:14:28.013003    1956 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0329 17:14:28.078927    1956 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.16.0/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:0c23f68e9d9de4489f09a530426fd1e3 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0329 17:14:28.459757    1956 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
	I0329 17:14:28.459757    1956 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5.tar -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.30-1647797120-13815@sha256_90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5.tar
	I0329 17:14:28.459757    1956 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5.tar -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.30-1647797120-13815@sha256_90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5.tar
	I0329 17:14:28.459757    1956 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory
	I0329 17:14:28.461461    1956 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
	I0329 17:14:38.843340    1956 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0329 17:14:38.843340    1956 preload.go:256] verifying checksumm of C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220329171422-1328"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.37s)
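The preload download in the log above carries its checksum in the URL (`?checksum=md5:0c23f68e9d9de4489f09a530426fd1e3`), and preload.go then saves and verifies that checksum for the downloaded tarball. A hedged sketch of the kind of digest computation involved (function name illustrative; the real verification lives in minikube's preload.go):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// md5Hex returns the lowercase hex MD5 digest of data, the form used in
// the preload URL's checksum query parameter. Illustrative only: minikube
// streams the tarball from disk rather than hashing it in memory.
func md5Hex(data []byte) string {
	sum := md5.Sum(data)
	return hex.EncodeToString(sum[:])
}

func main() {
	// Compare against the md5:... value from the download URL to verify
	// the saved preloaded-images tarball.
	fmt.Println(md5Hex([]byte("example tarball bytes")))
}
```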

x
+
TestDownloadOnly/v1.23.5/json-events (13.26s)

=== RUN   TestDownloadOnly/v1.23.5/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220329171422-1328 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220329171422-1328 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker: (13.262711s)
--- PASS: TestDownloadOnly/v1.23.5/json-events (13.26s)

TestDownloadOnly/v1.23.5/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.5/preload-exists
--- PASS: TestDownloadOnly/v1.23.5/preload-exists (0.00s)

TestDownloadOnly/v1.23.5/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.5/kubectl
--- PASS: TestDownloadOnly/v1.23.5/kubectl (0.00s)

TestDownloadOnly/v1.23.5/LogsDuration (0.33s)

=== RUN   TestDownloadOnly/v1.23.5/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220329171422-1328
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220329171422-1328: exit status 85 (326.6939ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 17:14:43
	Running on machine: minikube8
	Binary: Built with gc go1.17.7 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 17:14:43.018534    7180 out.go:297] Setting OutFile to fd 648 ...
	I0329 17:14:43.072096    7180 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:14:43.072096    7180 out.go:310] Setting ErrFile to fd 560...
	I0329 17:14:43.072096    7180 out.go:344] TERM=,COLORTERM=, which probably does not support color
	W0329 17:14:43.085821    7180 root.go:293] Error reading config file at C:\Users\jenkins.minikube8\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube8\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0329 17:14:43.086640    7180 out.go:304] Setting JSON to true
	I0329 17:14:43.089294    7180 start.go:114] hostinfo: {"hostname":"minikube8","uptime":1279,"bootTime":1648572804,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0329 17:14:43.089294    7180 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 17:14:43.343552    7180 notify.go:193] Checking for updates...
	I0329 17:14:43.360581    7180 config.go:176] Loaded profile config "download-only-20220329171422-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0329 17:14:43.361005    7180 start.go:708] api.Load failed for download-only-20220329171422-1328: filestore "download-only-20220329171422-1328": Docker machine "download-only-20220329171422-1328" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0329 17:14:43.361005    7180 driver.go:346] Setting default libvirt URI to qemu:///system
	W0329 17:14:43.361005    7180 start.go:708] api.Load failed for download-only-20220329171422-1328: filestore "download-only-20220329171422-1328": Docker machine "download-only-20220329171422-1328" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0329 17:14:45.330599    7180 docker.go:137] docker version: linux-20.10.13
	I0329 17:14:45.338140    7180 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:14:46.042406    7180 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:42 OomKillDisable:true NGoroutines:42 SystemTime:2022-03-29 17:14:45.7186737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:14:46.453972    7180 start.go:283] selected driver: docker
	I0329 17:14:46.453972    7180 start.go:800] validating driver "docker" against &{Name:download-only-20220329171422-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220329171422-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:14:46.474541    7180 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:14:47.135522    7180 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:42 OomKillDisable:true NGoroutines:42 SystemTime:2022-03-29 17:14:46.8150232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:14:47.180312    7180 cni.go:93] Creating CNI manager for ""
	I0329 17:14:47.180312    7180 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 17:14:47.180312    7180 start_flags.go:306] config:
	{Name:download-only-20220329171422-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:download-only-20220329171422-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:14:47.336486    7180 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 17:14:47.339220    7180 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:14:47.339291    7180 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 17:14:47.384291    7180 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.5/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 17:14:47.384826    7180 cache.go:57] Caching tarball of preloaded images
	I0329 17:14:47.385055    7180 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 17:14:47.388017    7180 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 ...
	I0329 17:14:47.449472    7180 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.5/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4?checksum=md5:b4b3d1771f6a934557953d7b31a587d4 -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 17:14:47.793862    7180 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
	I0329 17:14:47.793951    7180 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5.tar -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.30-1647797120-13815@sha256_90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5.tar
	I0329 17:14:47.794244    7180 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5.tar -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.30-1647797120-13815@sha256_90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5.tar
	I0329 17:14:47.794292    7180 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory
	I0329 17:14:47.794381    7180 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory, skipping pull
	I0329 17:14:47.794381    7180 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in cache, skipping pull
	I0329 17:14:47.794381    7180 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 as a tarball
	I0329 17:14:52.615746    7180 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 ...
	I0329 17:14:52.616748    7180 preload.go:256] verifying checksum of C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220329171422-1328"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.5/LogsDuration (0.33s)

TestDownloadOnly/v1.23.6-rc.0/json-events (13.64s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220329171422-1328 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:73: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220329171422-1328 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker: (13.6350555s)
--- PASS: TestDownloadOnly/v1.23.6-rc.0/json-events (13.64s)

TestDownloadOnly/v1.23.6-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.6-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.23.6-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.23.6-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.65s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220329171422-1328
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220329171422-1328: exit status 85 (651.7261ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 17:14:56
	Running on machine: minikube8
	Binary: Built with gc go1.17.7 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 17:14:56.623164    9252 out.go:297] Setting OutFile to fd 728 ...
	I0329 17:14:56.676602    9252 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:14:56.676602    9252 out.go:310] Setting ErrFile to fd 564...
	I0329 17:14:56.677120    9252 out.go:344] TERM=,COLORTERM=, which probably does not support color
	W0329 17:14:56.686307    9252 root.go:293] Error reading config file at C:\Users\jenkins.minikube8\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube8\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0329 17:14:56.686989    9252 out.go:304] Setting JSON to true
	I0329 17:14:56.689735    9252 start.go:114] hostinfo: {"hostname":"minikube8","uptime":1293,"bootTime":1648572803,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0329 17:14:56.689735    9252 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 17:14:57.010453    9252 notify.go:193] Checking for updates...
	I0329 17:14:57.014794    9252 config.go:176] Loaded profile config "download-only-20220329171422-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	W0329 17:14:57.015036    9252 start.go:708] api.Load failed for download-only-20220329171422-1328: filestore "download-only-20220329171422-1328": Docker machine "download-only-20220329171422-1328" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0329 17:14:57.015203    9252 driver.go:346] Setting default libvirt URI to qemu:///system
	W0329 17:14:57.015304    9252 start.go:708] api.Load failed for download-only-20220329171422-1328: filestore "download-only-20220329171422-1328": Docker machine "download-only-20220329171422-1328" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0329 17:14:58.906917    9252 docker.go:137] docker version: linux-20.10.13
	I0329 17:14:58.915245    9252 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:14:59.593945    9252 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:42 OomKillDisable:true NGoroutines:42 SystemTime:2022-03-29 17:14:59.2829836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:14:59.801488    9252 start.go:283] selected driver: docker
	I0329 17:14:59.801744    9252 start.go:800] validating driver "docker" against &{Name:download-only-20220329171422-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:download-only-20220329171422-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:14:59.824933    9252 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:15:00.496884    9252 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:42 OomKillDisable:true NGoroutines:42 SystemTime:2022-03-29 17:15:00.176106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:15:00.545711    9252 cni.go:93] Creating CNI manager for ""
	I0329 17:15:00.546236    9252 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 17:15:00.546236    9252 start_flags.go:306] config:
	{Name:download-only-20220329171422-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:download-only-20220329171422-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:15:00.551234    9252 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 17:15:00.553803    9252 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0329 17:15:00.553803    9252 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 17:15:00.593413    9252 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.6-rc.0/preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4
	I0329 17:15:00.593901    9252 cache.go:57] Caching tarball of preloaded images
	I0329 17:15:00.594132    9252 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0329 17:15:00.792324    9252 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0329 17:15:00.856564    9252 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.6-rc.0/preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:d90e40f602d4362984725b3ec643bc0d -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4
	I0329 17:15:01.019738    9252 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
	I0329 17:15:01.019802    9252 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5.tar -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.30-1647797120-13815@sha256_90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5.tar
	I0329 17:15:01.020270    9252 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5.tar -> C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.30-1647797120-13815@sha256_90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5.tar
	I0329 17:15:01.020317    9252 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory
	I0329 17:15:01.020495    9252 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory, skipping pull
	I0329 17:15:01.020554    9252 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in cache, skipping pull
	I0329 17:15:01.020554    9252 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 as a tarball
	I0329 17:15:06.651439    9252 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0329 17:15:06.652392    9252 preload.go:256] verifying checksum of C:\Users\jenkins.minikube8\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220329171422-1328"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.65s)
TestDownloadOnly/DeleteAll (6.31s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:193: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:193: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (6.3132954s)
--- PASS: TestDownloadOnly/DeleteAll (6.31s)
TestDownloadOnly/DeleteAlwaysSucceeds (4.21s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20220329171422-1328
aaa_download_only_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20220329171422-1328: (4.213962s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (4.21s)
TestDownloadOnlyKic (50.9s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20220329171525-1328 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20220329171525-1328 --force --alsologtostderr --driver=docker: (44.6934377s)
helpers_test.go:176: Cleaning up "download-docker-20220329171525-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20220329171525-1328
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20220329171525-1328: (4.750114s)
--- PASS: TestDownloadOnlyKic (50.90s)
TestBinaryMirror (9.66s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:316: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220329171616-1328 --alsologtostderr --binary-mirror http://127.0.0.1:53905 --driver=docker
aaa_download_only_test.go:316: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220329171616-1328 --alsologtostderr --binary-mirror http://127.0.0.1:53905 --driver=docker: (4.8559028s)
helpers_test.go:176: Cleaning up "binary-mirror-20220329171616-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-20220329171616-1328
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-20220329171616-1328: (4.5686419s)
--- PASS: TestBinaryMirror (9.66s)
TestOffline (211.99s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20220329185711-1328 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-20220329185711-1328 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (3m10.9132517s)
helpers_test.go:176: Cleaning up "offline-docker-20220329185711-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20220329185711-1328
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20220329185711-1328: (21.0722349s)
--- PASS: TestOffline (211.99s)
TestAddons/Setup (467.48s)
=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-20220329171625-1328 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-20220329171625-1328 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m47.4825937s)
--- PASS: TestAddons/Setup (467.48s)
TestAddons/parallel/MetricsServer (11.16s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 29.5772ms
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-bd6f4dd56-hcldk" [6209c3c4-f01c-4892-9fea-ede5a07b4461] Running
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0364512s
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220329171625-1328 top pods -n kube-system
addons_test.go:383: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220329171625-1328 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:383: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220329171625-1328 addons disable metrics-server --alsologtostderr -v=1: (5.7234981s)
--- PASS: TestAddons/parallel/MetricsServer (11.16s)
TestAddons/parallel/HelmTiller (30.16s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 29.5772ms
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-6d67d5465d-7rtjq" [7c7356c0-5114-4010-b269-d0c7ebcab868] Running
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0364512s
addons_test.go:424: (dbg) Run:  kubectl --context addons-20220329171625-1328 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:424: (dbg) Done: kubectl --context addons-20220329171625-1328 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (19.5489583s)
addons_test.go:429: kubectl --context addons-20220329171625-1328 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220329171625-1328 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:441: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220329171625-1328 addons disable helm-tiller --alsologtostderr -v=1: (5.5265004s)
--- PASS: TestAddons/parallel/HelmTiller (30.16s)
TestAddons/parallel/CSI (94.36s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 36.6251ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20220329171625-1328 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:515: (dbg) Done: kubectl --context addons-20220329171625-1328 create -f testdata\csi-hostpath-driver\pvc.yaml: (2.1916499s)
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220329171625-1328 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run:  kubectl --context addons-20220329171625-1328 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [f4c31dd6-0d1c-491d-9565-e415fa49fc9b] Pending
helpers_test.go:343: "task-pv-pod" [f4c31dd6-0d1c-491d-9565-e415fa49fc9b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod" [f4c31dd6-0d1c-491d-9565-e415fa49fc9b] Running
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 48.1866213s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20220329171625-1328 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220329171625-1328 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220329171625-1328 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20220329171625-1328 delete pod task-pv-pod
addons_test.go:545: (dbg) Done: kubectl --context addons-20220329171625-1328 delete pod task-pv-pod: (2.3989454s)
addons_test.go:551: (dbg) Run:  kubectl --context addons-20220329171625-1328 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20220329171625-1328 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220329171625-1328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20220329171625-1328 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [ab1acc34-276b-4d0f-9f48-388e426093cf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod-restore" [ab1acc34-276b-4d0f-9f48-388e426093cf] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 16.0281639s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20220329171625-1328 delete pod task-pv-pod-restore
addons_test.go:577: (dbg) Done: kubectl --context addons-20220329171625-1328 delete pod task-pv-pod-restore: (1.3613498s)
addons_test.go:581: (dbg) Run:  kubectl --context addons-20220329171625-1328 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20220329171625-1328 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220329171625-1328 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:589: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220329171625-1328 addons disable csi-hostpath-driver --alsologtostderr -v=1: (13.4123687s)
addons_test.go:593: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220329171625-1328 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:593: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220329171625-1328 addons disable volumesnapshots --alsologtostderr -v=1: (5.1234223s)
--- PASS: TestAddons/parallel/CSI (94.36s)
TestAddons/serial/GCPAuth (24.49s)
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20220329171625-1328 create -f testdata\busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [133df923-4f37-42a4-93b6-3dbdeb66f9d9] Pending
helpers_test.go:343: "busybox" [133df923-4f37-42a4-93b6-3dbdeb66f9d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [133df923-4f37-42a4-93b6-3dbdeb66f9d9] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 10.0700151s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20220329171625-1328 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:629: (dbg) Run:  kubectl --context addons-20220329171625-1328 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20220329171625-1328 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220329171625-1328 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220329171625-1328 addons disable gcp-auth --alsologtostderr -v=1: (11.7302131s)
--- PASS: TestAddons/serial/GCPAuth (24.49s)
TestAddons/StoppedEnableDisable (21.18s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-20220329171625-1328
addons_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-20220329171625-1328: (16.5176362s)
addons_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-20220329171625-1328
addons_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-20220329171625-1328: (2.310492s)
addons_test.go:141: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-20220329171625-1328
addons_test.go:141: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-20220329171625-1328: (2.3484525s)
--- PASS: TestAddons/StoppedEnableDisable (21.18s)
TestCertOptions (167.56s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20220329191032-1328 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:50: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-20220329191032-1328 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m51.0969949s)
cert_options_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20220329191032-1328 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-20220329191032-1328 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (4.6710161s)
cert_options_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-20220329191032-1328 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-20220329191032-1328 -- "sudo cat /etc/kubernetes/admin.conf": (4.7582215s)
helpers_test.go:176: Cleaning up "cert-options-20220329191032-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20220329191032-1328
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20220329191032-1328: (46.4828684s)
--- PASS: TestCertOptions (167.56s)
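The SAN verification above (cert_options_test.go:61) boils down to running `openssl x509 -text -noout` against the apiserver certificate and checking that every `--apiserver-names`/`--apiserver-ips` value appears in the Subject Alternative Name block. A standalone sketch of the same check, generating a throwaway certificate with the SANs this test passes (assumes OpenSSL 1.1.1+ for `-addext`; the `/tmp` paths are arbitrary, not from this run):

```shell
# Create a self-signed cert carrying the names/IPs TestCertOptions supplies.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/apiserver-demo.key -out /tmp/apiserver-demo.crt \
  -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15"

# Same inspection command the test runs over SSH; the SAN block should list
# every DNS name and IP address supplied above.
openssl x509 -text -noout -in /tmp/apiserver-demo.crt | grep -A1 "Subject Alternative Name"
```

Against the real cluster the only difference is that the certificate lives at /var/lib/minikube/certs/apiserver.crt inside the node.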
TestDockerFlags (161.85s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20220329190750-1328 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:46: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-20220329190750-1328 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (2m16.0336009s)
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220329190750-1328 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20220329190750-1328 ssh "sudo systemctl show docker --property=Environment --no-pager": (4.0763994s)
docker_test.go:62: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220329190750-1328 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:62: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20220329190750-1328 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (4.2332671s)
helpers_test.go:176: Cleaning up "docker-flags-20220329190750-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20220329190750-1328
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20220329190750-1328: (17.5035549s)
--- PASS: TestDockerFlags (161.85s)
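The check in docker_test.go:51 amounts to asserting that every `--docker-env` value appears in the docker unit's `Environment` property. A minimal sketch of that assertion against a sample line (the `Environment=` line below is hypothetical output of `systemctl show docker --property=Environment`, not captured from this run):

```shell
# Hypothetical property line as systemctl would print it.
line='Environment=FOO=BAR BAZ=BAT'

# Require each expected --docker-env value to be present as a whole token.
for want in FOO=BAR BAZ=BAT; do
  case " ${line#Environment=} " in
    *" $want "*) echo "found $want" ;;
    *) echo "missing $want" >&2; exit 1 ;;
  esac
done
```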
TestForceSystemdFlag (209.63s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20220329185711-1328 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:86: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-20220329185711-1328 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (2m55.6145626s)
docker_test.go:105: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20220329185711-1328 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:105: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-20220329185711-1328 ssh "docker info --format {{.CgroupDriver}}": (5.9834497s)
helpers_test.go:176: Cleaning up "force-systemd-flag-20220329185711-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220329185711-1328
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220329185711-1328: (28.0336343s)
--- PASS: TestForceSystemdFlag (209.63s)
TestForceSystemdEnv (462.73s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20220329190726-1328 --memory=2048 --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-20220329190726-1328 --memory=2048 --alsologtostderr -v=5 --driver=docker: (7m6.7670276s)
docker_test.go:105: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20220329190726-1328 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:105: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-20220329190726-1328 ssh "docker info --format {{.CgroupDriver}}": (4.062344s)
helpers_test.go:176: Cleaning up "force-systemd-env-20220329190726-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20220329190726-1328
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20220329190726-1328: (31.8986875s)
--- PASS: TestForceSystemdEnv (462.73s)

TestErrorSpam/setup (97.61s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20220329172654-1328 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 --driver=docker
error_spam_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-20220329172654-1328 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 --driver=docker: (1m37.6103775s)
error_spam_test.go:89: acceptable stderr: "! C:\\ProgramData\\chocolatey\\bin\\kubectl.exe is version 1.18.2, which may have incompatibilites with Kubernetes 1.23.5."
--- PASS: TestErrorSpam/setup (97.61s)

TestErrorSpam/start (11.63s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 start --dry-run
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 start --dry-run: (3.8087835s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 start --dry-run
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 start --dry-run: (3.950875s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 start --dry-run
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 start --dry-run: (3.862293s)
--- PASS: TestErrorSpam/start (11.63s)

TestErrorSpam/status (12.29s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 status
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 status: (4.077505s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 status
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 status: (4.0983356s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 status
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 status: (4.1134425s)
--- PASS: TestErrorSpam/status (12.29s)

TestErrorSpam/pause (11.83s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 pause
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 pause: (4.4298126s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 pause
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 pause: (3.5886426s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 pause
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 pause: (3.8130699s)
--- PASS: TestErrorSpam/pause (11.83s)

TestErrorSpam/unpause (12.42s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 unpause
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 unpause: (4.4667618s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 unpause
E0329 17:29:13.441685    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 17:29:13.457481    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 17:29:13.473519    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 17:29:13.505651    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 17:29:13.551320    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 17:29:13.646970    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 17:29:13.821044    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 17:29:14.142504    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 17:29:14.795710    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 17:29:16.089145    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 unpause: (4.1094478s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 unpause
E0329 17:29:18.655581    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 unpause: (3.8448404s)
--- PASS: TestErrorSpam/unpause (12.42s)

TestErrorSpam/stop (26.92s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 stop
E0329 17:29:23.788463    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 17:29:34.030302    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 stop: (15.9586032s)
error_spam_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 stop
error_spam_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 stop: (5.4586097s)
error_spam_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 stop
error_spam_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220329172654-1328 --log_dir C:\Users\jenkins.minikube8\AppData\Local\Temp\nospam-20220329172654-1328 stop: (5.4971525s)
--- PASS: TestErrorSpam/stop (26.92s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1796: local sync path: C:\Users\jenkins.minikube8\minikube-integration\.minikube\files\etc\test\nested\copy\1328\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (109.85s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2178: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220329172957-1328 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0329 17:30:35.485238    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
functional_test.go:2178: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220329172957-1328 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m49.8451139s)
--- PASS: TestFunctional/serial/StartWithProxy (109.85s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (20.45s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:656: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220329172957-1328 --alsologtostderr -v=8
E0329 17:31:57.415883    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
functional_test.go:656: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220329172957-1328 --alsologtostderr -v=8: (20.4459249s)
functional_test.go:660: soft start took 20.4471786s for "functional-20220329172957-1328" cluster.
--- PASS: TestFunctional/serial/SoftStart (20.45s)

TestFunctional/serial/KubeContext (0.22s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:678: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.22s)

TestFunctional/serial/KubectlGetPods (0.35s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:693: (dbg) Run:  kubectl --context functional-20220329172957-1328 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.35s)

TestFunctional/serial/CacheCmd/cache/add_remote (15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1046: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 cache add k8s.gcr.io/pause:3.1
functional_test.go:1046: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 cache add k8s.gcr.io/pause:3.1: (5.2907019s)
functional_test.go:1046: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 cache add k8s.gcr.io/pause:3.3
functional_test.go:1046: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 cache add k8s.gcr.io/pause:3.3: (4.6999564s)
functional_test.go:1046: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 cache add k8s.gcr.io/pause:latest
functional_test.go:1046: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 cache add k8s.gcr.io/pause:latest: (5.0069705s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (15.00s)

TestFunctional/serial/CacheCmd/cache/add_local (7.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220329172957-1328 C:\Users\jenkins.minikube8\AppData\Local\Temp\functional-20220329172957-13282543506888
functional_test.go:1077: (dbg) Done: docker build -t minikube-local-cache-test:functional-20220329172957-1328 C:\Users\jenkins.minikube8\AppData\Local\Temp\functional-20220329172957-13282543506888: (1.8061429s)
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 cache add minikube-local-cache-test:functional-20220329172957-1328
functional_test.go:1089: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 cache add minikube-local-cache-test:functional-20220329172957-1328: (4.3018135s)
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 cache delete minikube-local-cache-test:functional-20220329172957-1328
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220329172957-1328
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (7.02s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.31s)

TestFunctional/serial/CacheCmd/cache/list (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.34s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh sudo crictl images
functional_test.go:1124: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh sudo crictl images: (3.9634125s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.96s)

TestFunctional/serial/CacheCmd/cache/cache_reload (16.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh sudo docker rmi k8s.gcr.io/pause:latest: (4.000363s)
functional_test.go:1153: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (3.9374739s)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 cache reload: (4.4456651s)
functional_test.go:1163: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1163: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (3.9553546s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (16.34s)

TestFunctional/serial/CacheCmd/cache/delete (0.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.62s)

TestFunctional/serial/MinikubeKubectlCmd (2.5s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:713: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 kubectl -- --context functional-20220329172957-1328 get pods
functional_test.go:713: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 kubectl -- --context functional-20220329172957-1328 get pods: (2.4985865s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.50s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.95s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:738: (dbg) Run:  out\kubectl.exe --context functional-20220329172957-1328 get pods
functional_test.go:738: (dbg) Done: out\kubectl.exe --context functional-20220329172957-1328 get pods: (1.9427861s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.95s)

TestFunctional/serial/ExtraConfig (54.76s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:754: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220329172957-1328 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:754: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220329172957-1328 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (54.7630693s)
functional_test.go:758: restart took 54.7633255s for "functional-20220329172957-1328" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (54.76s)

TestFunctional/serial/ComponentHealth (0.27s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:807: (dbg) Run:  kubectl --context functional-20220329172957-1328 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:822: etcd phase: Running
functional_test.go:832: etcd status: Ready
functional_test.go:822: kube-apiserver phase: Running
functional_test.go:832: kube-apiserver status: Ready
functional_test.go:822: kube-controller-manager phase: Running
functional_test.go:832: kube-controller-manager status: Ready
functional_test.go:822: kube-scheduler phase: Running
functional_test.go:832: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.27s)

TestFunctional/serial/LogsCmd (5.95s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 logs
functional_test.go:1236: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 logs: (5.9479997s)
--- PASS: TestFunctional/serial/LogsCmd (5.95s)

TestFunctional/serial/LogsFileCmd (6.04s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1253: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 logs --file C:\Users\jenkins.minikube8\AppData\Local\Temp\functional-20220329172957-13283119261649\logs.txt
functional_test.go:1253: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 logs --file C:\Users\jenkins.minikube8\AppData\Local\Temp\functional-20220329172957-13283119261649\logs.txt: (6.0429744s)
--- PASS: TestFunctional/serial/LogsFileCmd (6.04s)

TestFunctional/parallel/ConfigCmd (1.9s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 config get cpus: exit status 14 (310.7749ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 config get cpus: exit status 14 (299.3788ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.90s)

TestFunctional/parallel/DryRun (7.06s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:971: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220329172957-1328 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:971: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220329172957-1328 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (3.2026364s)

-- stdout --
	* [functional-20220329172957-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0329 17:34:09.012152    7780 out.go:297] Setting OutFile to fd 720 ...
	I0329 17:34:09.068388    7780 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:34:09.068388    7780 out.go:310] Setting ErrFile to fd 988...
	I0329 17:34:09.068388    7780 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:34:09.092253    7780 out.go:304] Setting JSON to false
	I0329 17:34:09.098776    7780 start.go:114] hostinfo: {"hostname":"minikube8","uptime":2445,"bootTime":1648572804,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0329 17:34:09.098776    7780 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 17:34:09.107144    7780 out.go:176] * [functional-20220329172957-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0329 17:34:09.109132    7780 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0329 17:34:09.112146    7780 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0329 17:34:09.117139    7780 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 17:34:09.121137    7780 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 17:34:09.124138    7780 config.go:176] Loaded profile config "functional-20220329172957-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:34:09.125132    7780 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 17:34:11.165879    7780 docker.go:137] docker version: linux-20.10.13
	I0329 17:34:11.172871    7780 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:34:11.923056    7780 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:57 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-29 17:34:11.5658954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:34:11.932125    7780 out.go:176] * Using the docker driver based on existing profile
	I0329 17:34:11.932125    7780 start.go:283] selected driver: docker
	I0329 17:34:11.932125    7780 start.go:800] validating driver "docker" against &{Name:functional-20220329172957-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329172957-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:34:11.933100    7780 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0329 17:34:11.986070    7780 out.go:176] 
	W0329 17:34:11.986070    7780 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0329 17:34:11.993078    7780 out.go:176] 

** /stderr **
functional_test.go:988: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220329172957-1328 --dry-run --alsologtostderr -v=1 --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:988: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220329172957-1328 --dry-run --alsologtostderr -v=1 --driver=docker: (3.8551017s)
--- PASS: TestFunctional/parallel/DryRun (7.06s)

TestFunctional/parallel/InternationalLanguage (3.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1017: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220329172957-1328 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1017: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220329172957-1328 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (3.1728724s)

-- stdout --
	* [functional-20220329172957-1328] minikube v1.25.2 sur Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0329 17:34:05.833536    4204 out.go:297] Setting OutFile to fd 664 ...
	I0329 17:34:05.895539    4204 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:34:05.895539    4204 out.go:310] Setting ErrFile to fd 684...
	I0329 17:34:05.895539    4204 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 17:34:05.908544    4204 out.go:304] Setting JSON to false
	I0329 17:34:05.910533    4204 start.go:114] hostinfo: {"hostname":"minikube8","uptime":2442,"bootTime":1648572803,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"907a2b8c-8800-4f4e-912a-028cf331db55"}
	W0329 17:34:05.910533    4204 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 17:34:05.914540    4204 out.go:176] * [functional-20220329172957-1328] minikube v1.25.2 sur Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0329 17:34:05.918533    4204 out.go:176]   - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	I0329 17:34:05.921534    4204 out.go:176]   - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	I0329 17:34:05.929536    4204 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 17:34:05.933530    4204 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 17:34:05.934533    4204 config.go:176] Loaded profile config "functional-20220329172957-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 17:34:05.935548    4204 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 17:34:08.028598    4204 docker.go:137] docker version: linux-20.10.13
	I0329 17:34:08.035598    4204 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 17:34:08.707431    4204 info.go:263] docker info: {ID:EWJC:D32H:QDOV:Q37U:7NCG:FSEF:BHRI:5KZE:BNL5:7NRS:WK2R:WXHN Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:57 OomKillDisable:true NGoroutines:49 SystemTime:2022-03-29 17:34:08.3999401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:2 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.0] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0329 17:34:08.716425    4204 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I0329 17:34:08.716425    4204 start.go:283] selected driver: docker
	I0329 17:34:08.716425    4204 start.go:800] validating driver "docker" against &{Name:functional-20220329172957-1328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329172957-1328 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube8:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 17:34:08.716425    4204 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0329 17:34:08.777653    4204 out.go:176] 
	W0329 17:34:08.778149    4204 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0329 17:34:08.784595    4204 out.go:176] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (3.17s)

TestFunctional/parallel/StatusCmd (12.41s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:851: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:851: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 status: (4.0938285s)
functional_test.go:857: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:857: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (4.2308841s)
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 status -o json
E0329 17:34:13.448907    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 status -o json: (4.0900609s)
--- PASS: TestFunctional/parallel/StatusCmd (12.41s)

TestFunctional/parallel/AddonsCmd (2.82s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1630: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1630: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 addons list: (2.4896063s)
functional_test.go:1642: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (2.82s)

TestFunctional/parallel/PersistentVolumeClaim (55.7s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [1dcf6935-5ddb-49f1-9f87-b38837c83ea2] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0398822s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20220329172957-1328 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20220329172957-1328 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220329172957-1328 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220329172957-1328 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [3d39a4d8-25ef-40c8-aeee-26299d40c667] Pending
helpers_test.go:343: "sp-pod" [3d39a4d8-25ef-40c8-aeee-26299d40c667] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [3d39a4d8-25ef-40c8-aeee-26299d40c667] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 37.0311201s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20220329172957-1328 exec sp-pod -- touch /tmp/mount/foo

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20220329172957-1328 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:107: (dbg) Done: kubectl --context functional-20220329172957-1328 delete -f testdata/storage-provisioner/pod.yaml: (1.5134804s)
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220329172957-1328 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [22556e39-7908-44a3-bab6-4201cde99f79] Pending
helpers_test.go:343: "sp-pod" [22556e39-7908-44a3-bab6-4201cde99f79] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [22556e39-7908-44a3-bab6-4201cde99f79] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0650775s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20220329172957-1328 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (55.70s)

TestFunctional/parallel/SSHCmd (8.07s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1665: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1665: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "echo hello": (4.0318232s)
functional_test.go:1682: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "cat /etc/hostname"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1682: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "cat /etc/hostname": (4.0348688s)
--- PASS: TestFunctional/parallel/SSHCmd (8.07s)

TestFunctional/parallel/CpCmd (15.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 cp testdata\cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 cp testdata\cp-test.txt /home/docker/cp-test.txt: (3.4133264s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh -n functional-20220329172957-1328 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh -n functional-20220329172957-1328 "sudo cat /home/docker/cp-test.txt": (4.0135893s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 cp functional-20220329172957-1328:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\mk_test4014570281\cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 cp functional-20220329172957-1328:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\mk_test4014570281\cp-test.txt: (3.8751022s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh -n functional-20220329172957-1328 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh -n functional-20220329172957-1328 "sudo cat /home/docker/cp-test.txt": (3.9700655s)
--- PASS: TestFunctional/parallel/CpCmd (15.27s)

TestFunctional/parallel/MySQL (79.77s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-20220329172957-1328 replace --force -f testdata\mysql.yaml
functional_test.go:1734: (dbg) Done: kubectl --context functional-20220329172957-1328 replace --force -f testdata\mysql.yaml: (1.0007011s)
functional_test.go:1740: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-b87c45988-p2gxr" [4866b3db-09ff-453d-85ba-cd7df98a719d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-p2gxr" [4866b3db-09ff-453d-85ba-cd7df98a719d] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1740: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 51.0289593s
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220329172957-1328 exec mysql-b87c45988-p2gxr -- mysql -ppassword -e "show databases;"
functional_test.go:1748: (dbg) Non-zero exit: kubectl --context functional-20220329172957-1328 exec mysql-b87c45988-p2gxr -- mysql -ppassword -e "show databases;": exit status 1 (624.4936ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220329172957-1328 exec mysql-b87c45988-p2gxr -- mysql -ppassword -e "show databases;"
functional_test.go:1748: (dbg) Non-zero exit: kubectl --context functional-20220329172957-1328 exec mysql-b87c45988-p2gxr -- mysql -ppassword -e "show databases;": exit status 1 (697.8265ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220329172957-1328 exec mysql-b87c45988-p2gxr -- mysql -ppassword -e "show databases;"
functional_test.go:1748: (dbg) Non-zero exit: kubectl --context functional-20220329172957-1328 exec mysql-b87c45988-p2gxr -- mysql -ppassword -e "show databases;": exit status 1 (844.6151ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220329172957-1328 exec mysql-b87c45988-p2gxr -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1748: (dbg) Non-zero exit: kubectl --context functional-20220329172957-1328 exec mysql-b87c45988-p2gxr -- mysql -ppassword -e "show databases;": exit status 1 (578.1373ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220329172957-1328 exec mysql-b87c45988-p2gxr -- mysql -ppassword -e "show databases;"
functional_test.go:1748: (dbg) Non-zero exit: kubectl --context functional-20220329172957-1328 exec mysql-b87c45988-p2gxr -- mysql -ppassword -e "show databases;": exit status 1 (741.232ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220329172957-1328 exec mysql-b87c45988-p2gxr -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1748: (dbg) Non-zero exit: kubectl --context functional-20220329172957-1328 exec mysql-b87c45988-p2gxr -- mysql -ppassword -e "show databases;": exit status 1 (785.0863ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220329172957-1328 exec mysql-b87c45988-p2gxr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (79.77s)
TestFunctional/parallel/FileSync (3.92s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1870: Checking for existence of /etc/test/nested/copy/1328/hosts within VM
functional_test.go:1872: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo cat /etc/test/nested/copy/1328/hosts"
functional_test.go:1872: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo cat /etc/test/nested/copy/1328/hosts": (3.9175737s)
functional_test.go:1877: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (3.92s)
TestFunctional/parallel/CertSync (24.33s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1913: Checking for existence of /etc/ssl/certs/1328.pem within VM
functional_test.go:1914: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo cat /etc/ssl/certs/1328.pem"
functional_test.go:1914: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo cat /etc/ssl/certs/1328.pem": (4.4066726s)
functional_test.go:1913: Checking for existence of /usr/share/ca-certificates/1328.pem within VM
functional_test.go:1914: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo cat /usr/share/ca-certificates/1328.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1914: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo cat /usr/share/ca-certificates/1328.pem": (3.9695057s)
functional_test.go:1913: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1914: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1914: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo cat /etc/ssl/certs/51391683.0": (3.9819141s)
functional_test.go:1940: Checking for existence of /etc/ssl/certs/13282.pem within VM
functional_test.go:1941: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo cat /etc/ssl/certs/13282.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1941: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo cat /etc/ssl/certs/13282.pem": (3.8815382s)
functional_test.go:1940: Checking for existence of /usr/share/ca-certificates/13282.pem within VM
functional_test.go:1941: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo cat /usr/share/ca-certificates/13282.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1941: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo cat /usr/share/ca-certificates/13282.pem": (3.9765252s)
functional_test.go:1940: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1941: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1941: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (4.1083193s)
--- PASS: TestFunctional/parallel/CertSync (24.33s)
TestFunctional/parallel/NodeLabels (0.29s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20220329172957-1328 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.29s)
TestFunctional/parallel/NonActiveRuntimeDisabled (4.05s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1968: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1968: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh "sudo systemctl is-active crio": exit status 1 (4.0517971s)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (4.05s)
TestFunctional/parallel/ProfileCmd/profile_not_create (6.5s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1276: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1276: (dbg) Done: out/minikube-windows-amd64.exe profile lis: (2.4091868s)
functional_test.go:1281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (4.0880356s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (6.50s)
TestFunctional/parallel/ProfileCmd/profile_list (4.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1316: (dbg) Run:  out/minikube-windows-amd64.exe profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1316: (dbg) Done: out/minikube-windows-amd64.exe profile list: (4.0949884s)
functional_test.go:1321: Took "4.0949884s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1330: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1335: Took "318.4013ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (4.41s)
TestFunctional/parallel/ProfileCmd/profile_json_output (4.59s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1367: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1367: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (4.2137739s)
functional_test.go:1372: Took "4.2137739s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1380: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1385: Took "371.6288ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (4.59s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:128: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20220329172957-1328 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.69s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:148: (dbg) Run:  kubectl --context functional-20220329172957-1328 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [915f2b48-1f46-454c-a14c-d79f10b434aa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [915f2b48-1f46-454c-a14c-d79f10b434aa] Running
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.1714446s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.69s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.26s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220329172957-1328 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.26s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:370: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20220329172957-1328 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to kill pid 4604: TerminateProcess: Access is denied.
helpers_test.go:507: unable to kill pid 2016: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
TestFunctional/parallel/DockerEnv/powershell (16.76s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:496: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220329172957-1328 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220329172957-1328"
functional_test.go:496: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220329172957-1328 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220329172957-1328": (10.922893s)
functional_test.go:519: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220329172957-1328 docker-env | Invoke-Expression ; docker images"
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:519: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220329172957-1328 docker-env | Invoke-Expression ; docker images": (5.8247407s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (16.76s)
TestFunctional/parallel/Version/short (0.35s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2200: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 version --short
--- PASS: TestFunctional/parallel/Version/short (0.35s)
TestFunctional/parallel/Version/components (4.45s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 version -o=json --components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2214: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 version -o=json --components: (4.4549126s)
--- PASS: TestFunctional/parallel/Version/components (4.45s)
TestFunctional/parallel/ImageCommands/ImageListShort (3.1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls --format short
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls --format short: (3.0950815s)
functional_test.go:263: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-20220329172957-1328
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220329172957-1328
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (3.10s)
TestFunctional/parallel/ImageCommands/ImageListTable (3.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls --format table
functional_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls --format table: (3.1597247s)
functional_test.go:263: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| docker.io/library/nginx                     | alpine                         | 53722defe6278 | 23.4MB |
| k8s.gcr.io/kube-proxy                       | v1.23.5                        | 3c53fa8541f95 | 112MB  |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | 25f8c7f3da61c | 293MB  |
| gcr.io/google-containers/addon-resizer      | functional-20220329172957-1328 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/minikube-local-cache-test | functional-20220329172957-1328 | b40ef9168bc9d | 30B    |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
| docker.io/kubernetesui/dashboard            | v2.3.1                         | e1482a24335a6 | 220MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest                         | 12766a6745eea | 142MB  |
| docker.io/library/mysql                     | 5.7                            | 05311a87aeb4d | 450MB  |
| k8s.gcr.io/kube-apiserver                   | v1.23.5                        | 3fc1d62d65872 | 135MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.5                        | 884d49d6d8c9f | 53.5MB |
| k8s.gcr.io/kube-controller-manager          | v1.23.5                        | b0c9e5e4dbb14 | 125MB  |
| k8s.gcr.io/pause                            | 3.6                            | 6270bb605e12e | 683kB  |
| docker.io/kubernetesui/metrics-scraper      | v1.0.7                         | 7801cfc6d5c07 | 34.4MB |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (3.16s)
TestFunctional/parallel/ImageCommands/ImageListJson (3.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls --format json
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls --format json: (3.2170888s)
functional_test.go:263: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls --format json:
[{"id":"05311a87aeb4d7f98b2726c39d4d29d6a174d20953a6d1ceaa236bfa177f5fb6","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"450000000"},{"id":"3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.5"],"size":"135000000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220329172957-1328"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"53722defe627853c4f67a743b54246916074a824bc93bc7e05f452c6929374bf","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"12766a6745eea133de9fdcd03ff720fa971fdaf21113d4bc72b417c123b15619","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.5"],"size":"112000000"},{"id":"884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.5"],"size":"53500000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"b40ef9168bc9d9fbb0f5f8ad3bb3280c98f8320fdee3e97823cf683b01e0264f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220329172957-1328"],"size":"30"},{"id":"e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:v2.3.1"],"size":"220000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.5"],"size":"125000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:v1.0.7"],"size":"34400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (3.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (3.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls --format yaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls --format yaml: (3.1385563s)
functional_test.go:263: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls --format yaml:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220329172957-1328
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.5
size: "135000000"
- id: 884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.5
size: "53500000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:v2.3.1
size: "220000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 53722defe627853c4f67a743b54246916074a824bc93bc7e05f452c6929374bf
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.5
size: "112000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: b40ef9168bc9d9fbb0f5f8ad3bb3280c98f8320fdee3e97823cf683b01e0264f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220329172957-1328
size: "30"
- id: b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.5
size: "125000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:v1.0.7
size: "34400000"
- id: 12766a6745eea133de9fdcd03ff720fa971fdaf21113d4bc72b417c123b15619
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 05311a87aeb4d7f98b2726c39d4d29d6a174d20953a6d1ceaa236bfa177f5fb6
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "450000000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (3.14s)

TestFunctional/parallel/ImageCommands/ImageBuild (14.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 ssh pgrep buildkitd: exit status 1 (4.0376402s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:312: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image build -t localhost/my-image:functional-20220329172957-1328 testdata\build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:312: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image build -t localhost/my-image:functional-20220329172957-1328 testdata\build: (6.9787759s)
functional_test.go:317: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image build -t localhost/my-image:functional-20220329172957-1328 testdata\build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in f110e9e604d9
Removing intermediate container f110e9e604d9
---> 76a7ae5fc1c2
Step 3/3 : ADD content.txt /
---> d91fbf327bdf
Successfully built d91fbf327bdf
Successfully tagged localhost/my-image:functional-20220329172957-1328
functional_test.go:445: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls
functional_test.go:445: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls: (3.0364686s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (14.05s)

TestFunctional/parallel/ImageCommands/Setup (3.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.1894882s)
functional_test.go:344: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220329172957-1328
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.75s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (15.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329172957-1328

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329172957-1328: (12.6644768s)
functional_test.go:445: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls
functional_test.go:445: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls: (3.0282598s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (15.69s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.86s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2060: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2060: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 update-context --alsologtostderr -v=2: (2.8578801s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.86s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.83s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2060: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2060: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 update-context --alsologtostderr -v=2: (2.8314599s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.83s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.86s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2060: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2060: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 update-context --alsologtostderr -v=2: (2.8613195s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.86s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (10.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329172957-1328

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329172957-1328: (7.827896s)
functional_test.go:445: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:445: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls: (3.071534s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (10.90s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (24.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.03694s)
functional_test.go:237: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220329172957-1328
functional_test.go:242: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329172957-1328

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:242: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329172957-1328: (17.5307562s)
functional_test.go:445: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls
functional_test.go:445: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls: (3.3966063s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (24.48s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (11.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:377: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image save gcr.io/google-containers/addon-resizer:functional-20220329172957-1328 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
functional_test.go:377: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image save gcr.io/google-containers/addon-resizer:functional-20220329172957-1328 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (11.6905566s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (11.69s)

TestFunctional/parallel/ImageCommands/ImageRemove (7.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:389: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image rm gcr.io/google-containers/addon-resizer:functional-20220329172957-1328
functional_test.go:389: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image rm gcr.io/google-containers/addon-resizer:functional-20220329172957-1328: (3.9099882s)
functional_test.go:445: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls
functional_test.go:445: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls: (3.7101072s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (7.62s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (11.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
functional_test.go:406: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (8.8662732s)
functional_test.go:445: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:445: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image ls: (3.0227019s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (11.89s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:416: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220329172957-1328
functional_test.go:421: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220329172957-1328

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:421: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220329172957-1328 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220329172957-1328: (9.4244481s)
functional_test.go:426: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220329172957-1328
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.56s)

TestFunctional/delete_addon-resizer_images (0.01s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:187: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: context deadline exceeded (0s)
functional_test.go:189: failed to remove image "gcr.io/google-containers/addon-resizer:1.8.8" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8": context deadline exceeded
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220329172957-1328
functional_test.go:187: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220329172957-1328: context deadline exceeded (0s)
functional_test.go:189: failed to remove image "gcr.io/google-containers/addon-resizer:functional-20220329172957-1328" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220329172957-1328": context deadline exceeded
--- PASS: TestFunctional/delete_addon-resizer_images (0.01s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220329172957-1328
functional_test.go:195: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-20220329172957-1328: context deadline exceeded (0s)
functional_test.go:197: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-20220329172957-1328": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220329172957-1328
functional_test.go:203: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-20220329172957-1328: context deadline exceeded (0s)
functional_test.go:205: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-20220329172957-1328": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (124.16s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220329181027-1328 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220329181027-1328 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: (2m4.1615029s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (124.16s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (47.17s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220329181027-1328 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220329181027-1328 addons enable ingress --alsologtostderr -v=5: (47.173362s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (47.17s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (3.51s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220329181027-1328 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220329181027-1328 addons enable ingress-dns --alsologtostderr -v=5: (3.5099308s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (3.51s)

TestJSONOutput/start/Command (112.98s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-20220329181420-1328 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0329 18:14:21.414004    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
E0329 18:14:26.544639    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
E0329 18:14:36.793820    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
E0329 18:14:57.292365    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
E0329 18:15:38.259829    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-20220329181420-1328 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m52.9827438s)
--- PASS: TestJSONOutput/start/Command (112.98s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (4.52s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-20220329181420-1328 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-20220329181420-1328 --output=json --user=testUser: (4.5233062s)
--- PASS: TestJSONOutput/pause/Command (4.52s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (4.21s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-20220329181420-1328 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-20220329181420-1328 --output=json --user=testUser: (4.2058325s)
--- PASS: TestJSONOutput/unpause/Command (4.21s)

TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (15.73s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-20220329181420-1328 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-20220329181420-1328 --output=json --user=testUser: (15.72844s)
--- PASS: TestJSONOutput/stop/Command (15.73s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (4.45s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20220329181647-1328 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20220329181647-1328 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (293.2595ms)
-- stdout --
	{"specversion":"1.0","id":"ecf0f6a0-03f7-4e4c-9e37-4e1e0d34ae2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220329181647-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e3dea9d0-8043-4073-a5a8-8391c2a7508c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube8\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"82ae88d6-eb0a-4b52-b672-4d8f88062bae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"1c9ddbf4-7c15-46b0-bfdb-c6831b4efd2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13730"}}
	{"specversion":"1.0","id":"e8b761fb-5e92-49af-a33c-d56c94f80f27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e00ae78d-e6f3-4258-af08-7dab2b9f94ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20220329181647-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20220329181647-1328
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20220329181647-1328: (4.1527511s)
--- PASS: TestErrorJSONOutput (4.45s)

TestKicCustomNetwork/create_custom_network (109.3s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220329181651-1328 --network=
E0329 18:17:00.188617    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
E0329 18:18:22.750061    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:18:22.765501    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:18:22.780352    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:18:22.811924    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:18:22.858616    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:18:22.952656    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:18:23.124418    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:18:23.450530    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:18:24.103392    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:18:25.391957    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:18:27.956438    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220329181651-1328 --network=: (1m36.9486236s)
kic_custom_network_test.go:123: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220329181651-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220329181651-1328
E0329 18:18:33.091449    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220329181651-1328: (11.8539434s)
--- PASS: TestKicCustomNetwork/create_custom_network (109.30s)

TestKicCustomNetwork/use_default_bridge_network (110.81s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220329181841-1328 --network=bridge
E0329 18:18:43.339398    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:18:56.652300    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 18:19:03.820250    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:19:13.462307    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 18:19:16.194052    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
E0329 18:19:44.034535    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
E0329 18:19:44.780769    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220329181841-1328 --network=bridge: (1m38.8535918s)
kic_custom_network_test.go:123: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220329181841-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220329181841-1328
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220329181841-1328: (11.4593774s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (110.81s)

TestKicExistingNetwork (113.18s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:123: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-20220329182037-1328 --network=existing-network
E0329 18:21:06.715415    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:94: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-20220329182037-1328 --network=existing-network: (1m34.7827971s)
helpers_test.go:176: Cleaning up "existing-network-20220329182037-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-20220329182037-1328
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-20220329182037-1328: (11.4483164s)
--- PASS: TestKicExistingNetwork (113.18s)

TestKicCustomSubnet (110.91s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:113: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-20220329182225-1328 --subnet=192.168.60.0/24
E0329 18:23:22.752580    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:23:50.557037    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:113: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-20220329182225-1328 --subnet=192.168.60.0/24: (1m38.3606356s)
kic_custom_network_test.go:134: (dbg) Run:  docker network inspect custom-subnet-20220329182225-1328 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-20220329182225-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-20220329182225-1328
E0329 18:24:13.464282    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-20220329182225-1328: (12.0683066s)
--- PASS: TestKicCustomSubnet (110.91s)

TestMainNoArgs (0.3s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
E0329 18:24:16.185888    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
--- PASS: TestMainNoArgs (0.30s)

TestMountStart/serial/StartWithMountFirst (29.76s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-20220329182416-1328 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-20220329182416-1328 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (28.7467569s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.76s)

TestMountStart/serial/VerifyMountFirst (3.85s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:115: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-20220329182416-1328 ssh -- ls /minikube-host
mount_start_test.go:115: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-20220329182416-1328 ssh -- ls /minikube-host: (3.8522597s)
--- PASS: TestMountStart/serial/VerifyMountFirst (3.85s)

TestMountStart/serial/StartWithMountSecond (30.04s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20220329182416-1328 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-20220329182416-1328 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (29.0304785s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.04s)

TestMountStart/serial/VerifyMountSecond (3.89s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:115: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220329182416-1328 ssh -- ls /minikube-host
mount_start_test.go:115: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220329182416-1328 ssh -- ls /minikube-host: (3.8907414s)
--- PASS: TestMountStart/serial/VerifyMountSecond (3.89s)

TestMountStart/serial/DeleteFirst (10.81s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-20220329182416-1328 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-20220329182416-1328 --alsologtostderr -v=5: (10.8063409s)
--- PASS: TestMountStart/serial/DeleteFirst (10.81s)

TestMountStart/serial/VerifyMountPostDelete (3.86s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:115: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220329182416-1328 ssh -- ls /minikube-host
mount_start_test.go:115: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220329182416-1328 ssh -- ls /minikube-host: (3.8593827s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (3.86s)

TestMountStart/serial/Stop (5.83s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-20220329182416-1328
mount_start_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-20220329182416-1328: (5.834316s)
--- PASS: TestMountStart/serial/Stop (5.83s)

TestMountStart/serial/RestartStopped (16.95s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:167: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20220329182416-1328
mount_start_test.go:167: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-20220329182416-1328: (15.94419s)
--- PASS: TestMountStart/serial/RestartStopped (16.95s)

TestMountStart/serial/VerifyMountPostStop (3.83s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:115: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220329182416-1328 ssh -- ls /minikube-host
mount_start_test.go:115: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220329182416-1328 ssh -- ls /minikube-host: (3.8313925s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (3.83s)

TestMultiNode/serial/FreshStart2Nodes (224.78s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220329182619-1328 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0329 18:28:22.759035    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:29:13.459981    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 18:29:16.195985    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
multinode_test.go:86: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220329182619-1328 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (3m39.0864749s)
multinode_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status --alsologtostderr: (5.6911144s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (224.78s)

TestMultiNode/serial/DeployApp2Nodes (26.48s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:486: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (2.5783375s)
multinode_test.go:491: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- rollout status deployment/busybox
multinode_test.go:491: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- rollout status deployment/busybox: (4.6611264s)
multinode_test.go:497: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:497: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- get pods -o jsonpath='{.items[*].status.podIP}': (1.9008616s)
multinode_test.go:509: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:509: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- get pods -o jsonpath='{.items[*].metadata.name}': (1.9507433s)
multinode_test.go:517: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-hhkmf -- nslookup kubernetes.io
multinode_test.go:517: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-hhkmf -- nslookup kubernetes.io: (3.5230273s)
multinode_test.go:517: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-sbfhq -- nslookup kubernetes.io
multinode_test.go:517: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-sbfhq -- nslookup kubernetes.io: (3.1958669s)
multinode_test.go:527: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-hhkmf -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-hhkmf -- nslookup kubernetes.default: (2.1991042s)
multinode_test.go:527: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-sbfhq -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-sbfhq -- nslookup kubernetes.default: (2.2006381s)
multinode_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-hhkmf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-hhkmf -- nslookup kubernetes.default.svc.cluster.local: (2.1430889s)
multinode_test.go:535: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-sbfhq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:535: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-sbfhq -- nslookup kubernetes.default.svc.cluster.local: (2.1254925s)
--- PASS: TestMultiNode/serial/DeployApp2Nodes (26.48s)

TestMultiNode/serial/PingHostFrom2Pods (10.58s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:545: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:545: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- get pods -o jsonpath='{.items[*].metadata.name}': (1.9314753s)
multinode_test.go:553: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-hhkmf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:553: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-hhkmf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.1666633s)
multinode_test.go:561: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-hhkmf -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:561: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-hhkmf -- sh -c "ping -c 1 192.168.65.2": (2.1955321s)
multinode_test.go:553: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-sbfhq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:553: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-sbfhq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.1573634s)
multinode_test.go:561: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-sbfhq -- sh -c "ping -c 1 192.168.65.2"
E0329 18:30:39.401148    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
multinode_test.go:561: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220329182619-1328 -- exec busybox-7978565885-sbfhq -- sh -c "ping -c 1 192.168.65.2": (2.1230652s)
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (10.58s)

TestMultiNode/serial/AddNode (101.12s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220329182619-1328 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-20220329182619-1328 -v 3 --alsologtostderr: (1m33.7239299s)
multinode_test.go:117: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status --alsologtostderr: (7.3924707s)
--- PASS: TestMultiNode/serial/AddNode (101.12s)

TestMultiNode/serial/ProfileList (4.03s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (4.0251808s)
--- PASS: TestMultiNode/serial/ProfileList (4.03s)

TestMultiNode/serial/CopyFile (131.51s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status --output json --alsologtostderr: (7.4897067s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp testdata\cp-test.txt multinode-20220329182619-1328:/home/docker/cp-test.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp testdata\cp-test.txt multinode-20220329182619-1328:/home/docker/cp-test.txt: (3.9176805s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328 "sudo cat /home/docker/cp-test.txt": (3.8302757s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\mk_cp_test2673936643\cp-test_multinode-20220329182619-1328.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\mk_cp_test2673936643\cp-test_multinode-20220329182619-1328.txt: (3.9418165s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328 "sudo cat /home/docker/cp-test.txt": (3.8646199s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328:/home/docker/cp-test.txt multinode-20220329182619-1328-m02:/home/docker/cp-test_multinode-20220329182619-1328_multinode-20220329182619-1328-m02.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328:/home/docker/cp-test.txt multinode-20220329182619-1328-m02:/home/docker/cp-test_multinode-20220329182619-1328_multinode-20220329182619-1328-m02.txt: (5.0044788s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328 "sudo cat /home/docker/cp-test.txt": (3.8858844s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m02 "sudo cat /home/docker/cp-test_multinode-20220329182619-1328_multinode-20220329182619-1328-m02.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m02 "sudo cat /home/docker/cp-test_multinode-20220329182619-1328_multinode-20220329182619-1328-m02.txt": (3.8803117s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328:/home/docker/cp-test.txt multinode-20220329182619-1328-m03:/home/docker/cp-test_multinode-20220329182619-1328_multinode-20220329182619-1328-m03.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328:/home/docker/cp-test.txt multinode-20220329182619-1328-m03:/home/docker/cp-test_multinode-20220329182619-1328_multinode-20220329182619-1328-m03.txt: (5.0820925s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328 "sudo cat /home/docker/cp-test.txt": (3.9035609s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m03 "sudo cat /home/docker/cp-test_multinode-20220329182619-1328_multinode-20220329182619-1328-m03.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m03 "sudo cat /home/docker/cp-test_multinode-20220329182619-1328_multinode-20220329182619-1328-m03.txt": (3.8049212s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp testdata\cp-test.txt multinode-20220329182619-1328-m02:/home/docker/cp-test.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp testdata\cp-test.txt multinode-20220329182619-1328-m02:/home/docker/cp-test.txt: (3.8530314s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m02 "sudo cat /home/docker/cp-test.txt": (3.9335426s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\mk_cp_test2673936643\cp-test_multinode-20220329182619-1328-m02.txt
E0329 18:33:22.750278    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\mk_cp_test2673936643\cp-test_multinode-20220329182619-1328-m02.txt: (3.8627355s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m02 "sudo cat /home/docker/cp-test.txt": (3.8700062s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328-m02:/home/docker/cp-test.txt multinode-20220329182619-1328:/home/docker/cp-test_multinode-20220329182619-1328-m02_multinode-20220329182619-1328.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328-m02:/home/docker/cp-test.txt multinode-20220329182619-1328:/home/docker/cp-test_multinode-20220329182619-1328-m02_multinode-20220329182619-1328.txt: (5.1154875s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m02 "sudo cat /home/docker/cp-test.txt": (3.9339996s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328 "sudo cat /home/docker/cp-test_multinode-20220329182619-1328-m02_multinode-20220329182619-1328.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328 "sudo cat /home/docker/cp-test_multinode-20220329182619-1328-m02_multinode-20220329182619-1328.txt": (3.935773s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328-m02:/home/docker/cp-test.txt multinode-20220329182619-1328-m03:/home/docker/cp-test_multinode-20220329182619-1328-m02_multinode-20220329182619-1328-m03.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328-m02:/home/docker/cp-test.txt multinode-20220329182619-1328-m03:/home/docker/cp-test_multinode-20220329182619-1328-m02_multinode-20220329182619-1328-m03.txt: (5.0830693s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m02 "sudo cat /home/docker/cp-test.txt": (3.8995558s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m03 "sudo cat /home/docker/cp-test_multinode-20220329182619-1328-m02_multinode-20220329182619-1328-m03.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m03 "sudo cat /home/docker/cp-test_multinode-20220329182619-1328-m02_multinode-20220329182619-1328-m03.txt": (3.9690488s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp testdata\cp-test.txt multinode-20220329182619-1328-m03:/home/docker/cp-test.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp testdata\cp-test.txt multinode-20220329182619-1328-m03:/home/docker/cp-test.txt: (3.9599656s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m03 "sudo cat /home/docker/cp-test.txt": (3.9519602s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\mk_cp_test2673936643\cp-test_multinode-20220329182619-1328-m03.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube8\AppData\Local\Temp\mk_cp_test2673936643\cp-test_multinode-20220329182619-1328-m03.txt: (3.8000837s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m03 "sudo cat /home/docker/cp-test.txt": (3.8726267s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328-m03:/home/docker/cp-test.txt multinode-20220329182619-1328:/home/docker/cp-test_multinode-20220329182619-1328-m03_multinode-20220329182619-1328.txt
E0329 18:34:13.467435    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 18:34:16.194389    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328-m03:/home/docker/cp-test.txt multinode-20220329182619-1328:/home/docker/cp-test_multinode-20220329182619-1328-m03_multinode-20220329182619-1328.txt: (5.0754019s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m03 "sudo cat /home/docker/cp-test.txt": (4.0026718s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328 "sudo cat /home/docker/cp-test_multinode-20220329182619-1328-m03_multinode-20220329182619-1328.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328 "sudo cat /home/docker/cp-test_multinode-20220329182619-1328-m03_multinode-20220329182619-1328.txt": (3.8914259s)
helpers_test.go:555: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328-m03:/home/docker/cp-test.txt multinode-20220329182619-1328-m02:/home/docker/cp-test_multinode-20220329182619-1328-m03_multinode-20220329182619-1328-m02.txt
helpers_test.go:555: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 cp multinode-20220329182619-1328-m03:/home/docker/cp-test.txt multinode-20220329182619-1328-m02:/home/docker/cp-test_multinode-20220329182619-1328-m03_multinode-20220329182619-1328-m02.txt: (5.0369592s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m03 "sudo cat /home/docker/cp-test.txt": (3.9672243s)
helpers_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m02 "sudo cat /home/docker/cp-test_multinode-20220329182619-1328-m03_multinode-20220329182619-1328-m02.txt"
helpers_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 ssh -n multinode-20220329182619-1328-m02 "sudo cat /home/docker/cp-test_multinode-20220329182619-1328-m03_multinode-20220329182619-1328-m02.txt": (3.8855015s)
--- PASS: TestMultiNode/serial/CopyFile (131.51s)

TestMultiNode/serial/StopNode (17.81s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:215: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 node stop m03
multinode_test.go:215: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 node stop m03: (5.2186113s)
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status
E0329 18:34:45.928372    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status: exit status 7 (6.2368011s)

-- stdout --
	multinode-20220329182619-1328
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220329182619-1328-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220329182619-1328-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status --alsologtostderr
multinode_test.go:228: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status --alsologtostderr: exit status 7 (6.3526467s)

-- stdout --
	multinode-20220329182619-1328
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220329182619-1328-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220329182619-1328-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0329 18:34:49.170667    6308 out.go:297] Setting OutFile to fd 516 ...
	I0329 18:34:49.237832    6308 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 18:34:49.237832    6308 out.go:310] Setting ErrFile to fd 700...
	I0329 18:34:49.237832    6308 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 18:34:49.258566    6308 out.go:304] Setting JSON to false
	I0329 18:34:49.258674    6308 mustload.go:65] Loading cluster: multinode-20220329182619-1328
	I0329 18:34:49.259734    6308 config.go:176] Loaded profile config "multinode-20220329182619-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 18:34:49.259793    6308 status.go:253] checking status of multinode-20220329182619-1328 ...
	I0329 18:34:49.276535    6308 cli_runner.go:133] Run: docker container inspect multinode-20220329182619-1328 --format={{.State.Status}}
	I0329 18:34:51.205911    6308 cli_runner.go:186] Completed: docker container inspect multinode-20220329182619-1328 --format={{.State.Status}}: (1.929217s)
	I0329 18:34:51.206120    6308 status.go:328] multinode-20220329182619-1328 host status = "Running" (err=<nil>)
	I0329 18:34:51.206201    6308 host.go:66] Checking if "multinode-20220329182619-1328" exists ...
	I0329 18:34:51.214180    6308 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329182619-1328
	I0329 18:34:51.706929    6308 host.go:66] Checking if "multinode-20220329182619-1328" exists ...
	I0329 18:34:51.720604    6308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 18:34:51.727677    6308 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329182619-1328
	I0329 18:34:52.235101    6308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55469 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220329182619-1328\id_rsa Username:docker}
	I0329 18:34:52.375111    6308 ssh_runner.go:195] Run: systemctl --version
	I0329 18:34:52.409046    6308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 18:34:52.446054    6308 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220329182619-1328
	I0329 18:34:52.933242    6308 kubeconfig.go:92] found "multinode-20220329182619-1328" server: "https://127.0.0.1:55473"
	I0329 18:34:52.933242    6308 api_server.go:165] Checking apiserver status ...
	I0329 18:34:52.944494    6308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0329 18:34:52.999617    6308 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1646/cgroup
	I0329 18:34:53.032138    6308 api_server.go:181] apiserver freezer: "20:freezer:/docker/4774bfeb2392bcc2f2c01db8e0e0870f3833048478888f7f5e734f780b868551/kubepods/burstable/podba2705fcb96409fe45b37bc4d10373e9/8473b05c40da2370c2a5328aac549dda826c7004c8d9970de8a218fee2ea8c82"
	I0329 18:34:53.045901    6308 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4774bfeb2392bcc2f2c01db8e0e0870f3833048478888f7f5e734f780b868551/kubepods/burstable/podba2705fcb96409fe45b37bc4d10373e9/8473b05c40da2370c2a5328aac549dda826c7004c8d9970de8a218fee2ea8c82/freezer.state
	I0329 18:34:53.078700    6308 api_server.go:203] freezer state: "THAWED"
	I0329 18:34:53.078775    6308 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55473/healthz ...
	I0329 18:34:53.096600    6308 api_server.go:266] https://127.0.0.1:55473/healthz returned 200:
	ok
	I0329 18:34:53.096600    6308 status.go:419] multinode-20220329182619-1328 apiserver status = Running (err=<nil>)
	I0329 18:34:53.096600    6308 status.go:255] multinode-20220329182619-1328 status: &{Name:multinode-20220329182619-1328 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0329 18:34:53.096600    6308 status.go:253] checking status of multinode-20220329182619-1328-m02 ...
	I0329 18:34:53.112659    6308 cli_runner.go:133] Run: docker container inspect multinode-20220329182619-1328-m02 --format={{.State.Status}}
	I0329 18:34:53.601197    6308 status.go:328] multinode-20220329182619-1328-m02 host status = "Running" (err=<nil>)
	I0329 18:34:53.601197    6308 host.go:66] Checking if "multinode-20220329182619-1328-m02" exists ...
	I0329 18:34:53.609056    6308 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329182619-1328-m02
	I0329 18:34:54.086981    6308 host.go:66] Checking if "multinode-20220329182619-1328-m02" exists ...
	I0329 18:34:54.097824    6308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 18:34:54.105042    6308 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329182619-1328-m02
	I0329 18:34:54.614599    6308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55528 SSHKeyPath:C:\Users\jenkins.minikube8\minikube-integration\.minikube\machines\multinode-20220329182619-1328-m02\id_rsa Username:docker}
	I0329 18:34:54.789042    6308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 18:34:54.819845    6308 status.go:255] multinode-20220329182619-1328-m02 status: &{Name:multinode-20220329182619-1328-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0329 18:34:54.819845    6308 status.go:253] checking status of multinode-20220329182619-1328-m03 ...
	I0329 18:34:54.838541    6308 cli_runner.go:133] Run: docker container inspect multinode-20220329182619-1328-m03 --format={{.State.Status}}
	I0329 18:34:55.299555    6308 status.go:328] multinode-20220329182619-1328-m03 host status = "Stopped" (err=<nil>)
	I0329 18:34:55.299619    6308 status.go:341] host is not running, skipping remaining checks
	I0329 18:34:55.299619    6308 status.go:255] multinode-20220329182619-1328-m03 status: &{Name:multinode-20220329182619-1328-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (17.81s)
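The stderr trace above shows the status probe sizing node disk usage with `sh -c "df -h /var | awk 'NR==2{print $5}'"`. A minimal sketch of what that pipeline extracts, run against hypothetical `df` output (the filesystem name and figures below are made up):

```shell
# Synthetic `df -h /var` output (an assumption -- real values come from the
# node). `NR==2` skips the header row, and `$5` is the Use% column, so the
# probe is left with just the usage percentage for /var.
printf 'Filesystem      Size  Used Avail Use%% Mounted on\n/dev/sda1        98G   12G   81G  13%% /var\n' \
  | awk 'NR==2{print $5}'
```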

TestMultiNode/serial/StartAfterStop (42.67s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:249: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 node start m03 --alsologtostderr
multinode_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 node start m03 --alsologtostderr: (34.5367918s)
multinode_test.go:266: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status
E0329 18:35:36.669146    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
multinode_test.go:266: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status: (7.3829173s)
multinode_test.go:280: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.67s)

TestMultiNode/serial/RestartKeepsNodes (210.42s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220329182619-1328
multinode_test.go:295: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-20220329182619-1328
multinode_test.go:295: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-20220329182619-1328: (32.653656s)
multinode_test.go:300: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220329182619-1328 --wait=true -v=8 --alsologtostderr
E0329 18:38:22.759305    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
multinode_test.go:300: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220329182619-1328 --wait=true -v=8 --alsologtostderr: (2m57.1530271s)
multinode_test.go:305: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220329182619-1328
--- PASS: TestMultiNode/serial/RestartKeepsNodes (210.42s)

TestMultiNode/serial/DeleteNode (25.71s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:399: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 node delete m03
E0329 18:39:13.472391    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 18:39:16.193407    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
multinode_test.go:399: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 node delete m03: (18.8938545s)
multinode_test.go:405: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status --alsologtostderr
multinode_test.go:405: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status --alsologtostderr: (5.7154781s)
multinode_test.go:419: (dbg) Run:  docker volume ls
multinode_test.go:429: (dbg) Run:  kubectl get nodes
multinode_test.go:437: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (25.71s)

TestMultiNode/serial/StopMultiNode (34.35s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:319: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 stop
multinode_test.go:319: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 stop: (28.9580499s)
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status: exit status 7 (2.7007368s)

-- stdout --
	multinode-20220329182619-1328
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220329182619-1328-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:332: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status --alsologtostderr
multinode_test.go:332: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status --alsologtostderr: exit status 7 (2.6894904s)

-- stdout --
	multinode-20220329182619-1328
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220329182619-1328-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0329 18:40:06.009409    7444 out.go:297] Setting OutFile to fd 516 ...
	I0329 18:40:06.064376    7444 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 18:40:06.064376    7444 out.go:310] Setting ErrFile to fd 752...
	I0329 18:40:06.064376    7444 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 18:40:06.074618    7444 out.go:304] Setting JSON to false
	I0329 18:40:06.074618    7444 mustload.go:65] Loading cluster: multinode-20220329182619-1328
	I0329 18:40:06.074884    7444 config.go:176] Loaded profile config "multinode-20220329182619-1328": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 18:40:06.074884    7444 status.go:253] checking status of multinode-20220329182619-1328 ...
	I0329 18:40:06.090938    7444 cli_runner.go:133] Run: docker container inspect multinode-20220329182619-1328 --format={{.State.Status}}
	I0329 18:40:07.963533    7444 cli_runner.go:186] Completed: docker container inspect multinode-20220329182619-1328 --format={{.State.Status}}: (1.8722503s)
	I0329 18:40:07.963609    7444 status.go:328] multinode-20220329182619-1328 host status = "Stopped" (err=<nil>)
	I0329 18:40:07.963609    7444 status.go:341] host is not running, skipping remaining checks
	I0329 18:40:07.963609    7444 status.go:255] multinode-20220329182619-1328 status: &{Name:multinode-20220329182619-1328 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0329 18:40:07.963664    7444 status.go:253] checking status of multinode-20220329182619-1328-m02 ...
	I0329 18:40:07.978855    7444 cli_runner.go:133] Run: docker container inspect multinode-20220329182619-1328-m02 --format={{.State.Status}}
	I0329 18:40:08.446532    7444 status.go:328] multinode-20220329182619-1328-m02 host status = "Stopped" (err=<nil>)
	I0329 18:40:08.446591    7444 status.go:341] host is not running, skipping remaining checks
	I0329 18:40:08.446591    7444 status.go:255] multinode-20220329182619-1328-m02 status: &{Name:multinode-20220329182619-1328-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (34.35s)

TestMultiNode/serial/RestartMultiNode (155.17s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:349: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220329182619-1328 --wait=true -v=8 --alsologtostderr --driver=docker
multinode_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220329182619-1328 --wait=true -v=8 --alsologtostderr --driver=docker: (2m27.3215059s)
multinode_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status --alsologtostderr
multinode_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220329182619-1328 status --alsologtostderr: (6.7331327s)
multinode_test.go:379: (dbg) Run:  kubectl get nodes
multinode_test.go:387: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (155.17s)
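Note: the readiness check at multinode_test.go:387 walks every node's `.status.conditions` with a go-template and prints the `Ready` condition's status. The same traversal can be sketched in Python against `kubectl get nodes -o json` output; the payload below is a hypothetical trimmed sample, not captured from this run:

```python
import json

# Hypothetical, trimmed sample of `kubectl get nodes -o json` output, reduced
# to the fields the readiness go-template actually inspects.
sample = json.loads("""
{"items": [
  {"metadata": {"name": "multinode-20220329182619-1328"},
   "status": {"conditions": [
     {"type": "MemoryPressure", "status": "False"},
     {"type": "Ready", "status": "True"}]}},
  {"metadata": {"name": "multinode-20220329182619-1328-m02"},
   "status": {"conditions": [
     {"type": "Ready", "status": "True"}]}}
]}
""")

# Equivalent of the go-template: for each item, for each status condition,
# emit .status when .type == "Ready".
ready = [cond["status"]
         for node in sample["items"]
         for cond in node["status"]["conditions"]
         if cond["type"] == "Ready"]
print(ready)  # the test passes when every node reports "True"
```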

TestMultiNode/serial/ValidateNameConflict (123.62s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:448: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220329182619-1328
multinode_test.go:457: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220329182619-1328-m02 --driver=docker
multinode_test.go:457: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220329182619-1328-m02 --driver=docker: exit status 14 (328.6423ms)

-- stdout --
	* [multinode-20220329182619-1328-m02] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220329182619-1328-m02' is duplicated with machine name 'multinode-20220329182619-1328-m02' in profile 'multinode-20220329182619-1328'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220329182619-1328-m03 --driver=docker
E0329 18:43:22.753075    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:44:13.465911    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 18:44:16.194758    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
multinode_test.go:465: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220329182619-1328-m03 --driver=docker: (1m42.4861018s)
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220329182619-1328
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20220329182619-1328: exit status 80 (3.7081442s)

-- stdout --
	* Adding node m03 to cluster multinode-20220329182619-1328
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220329182619-1328-m03 already exists in multinode-20220329182619-1328-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube_node_faf4be2af32ab6d64b40fb15c6239eaae2a98ae3_17.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:477: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20220329182619-1328-m03
multinode_test.go:477: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20220329182619-1328-m03: (16.8156532s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (123.62s)

TestPreload (307.09s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220329184512-1328 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
E0329 18:47:19.422673    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
preload_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20220329184512-1328 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: (2m37.2435799s)
preload_test.go:62: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20220329184512-1328 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:62: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20220329184512-1328 -- docker pull gcr.io/k8s-minikube/busybox: (5.3306853s)
preload_test.go:72: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220329184512-1328 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3
E0329 18:48:22.760584    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:49:13.477531    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 18:49:16.194449    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
preload_test.go:72: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20220329184512-1328 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3: (2m9.0550802s)
preload_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20220329184512-1328 -- docker images
preload_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20220329184512-1328 -- docker images: (3.9821497s)
helpers_test.go:176: Cleaning up "test-preload-20220329184512-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20220329184512-1328
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20220329184512-1328: (11.4736401s)
--- PASS: TestPreload (307.09s)

TestScheduledStopWindows (194.86s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20220329185019-1328 --memory=2048 --driver=docker
E0329 18:51:25.946269    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:129: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-20220329185019-1328 --memory=2048 --driver=docker: (1m41.8389923s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20220329185019-1328 --schedule 5m
scheduled_stop_test.go:138: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20220329185019-1328 --schedule 5m: (4.1657466s)
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20220329185019-1328 -n scheduled-stop-20220329185019-1328
scheduled_stop_test.go:192: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20220329185019-1328 -n scheduled-stop-20220329185019-1328: (4.6766464s)
scheduled_stop_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-20220329185019-1328 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-20220329185019-1328 -- sudo systemctl show minikube-scheduled-stop --no-page: (4.028306s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20220329185019-1328 --schedule 5s
E0329 18:52:16.682557    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:138: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20220329185019-1328 --schedule 5s: (3.7040986s)
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-20220329185019-1328
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-20220329185019-1328: exit status 7 (2.2674101s)

-- stdout --
	scheduled-stop-20220329185019-1328
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220329185019-1328 -n scheduled-stop-20220329185019-1328
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220329185019-1328 -n scheduled-stop-20220329185019-1328: exit status 7 (2.2076725s)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20220329185019-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20220329185019-1328
E0329 18:53:22.763258    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20220329185019-1328: (11.9587069s)
--- PASS: TestScheduledStopWindows (194.86s)

TestInsufficientStorage (80.3s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20220329185550-1328 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:51: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20220329185550-1328 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (1m2.0413537s)

-- stdout --
	{"specversion":"1.0","id":"8b4e9823-3155-4122-b5cb-6617d84c80ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220329185550-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a823000e-18f2-4cfd-b799-b2f019427bea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube8\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"5d88594a-c1cd-4831-8eb9-8bd6d8229a69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube8\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"e2581615-e221-4630-a316-71e37e97311b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13730"}}
	{"specversion":"1.0","id":"9930e216-5acd-4c73-89a7-28a866126bf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"80d85f54-6288-4520-930f-5070e045d2e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"12c83a40-a65a-4fae-8f6d-e8657967d0df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d28bacf5-3f0c-4965-b0b8-539778a6403a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"640d6a9b-d501-46ed-9276-130afcd1cbdd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220329185550-1328 in cluster insufficient-storage-20220329185550-1328","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b7091218-044f-4ee8-831d-4a1b65451650","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"39cbfcb0-1ab6-413b-b6f6-8994e93aeb65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4087124-b827-4c9d-b9a8-24ed136335f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220329185550-1328 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220329185550-1328 --output=json --layout=cluster: exit status 7 (3.9561027s)

-- stdout --
	{"Name":"insufficient-storage-20220329185550-1328","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220329185550-1328","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0329 18:56:56.870811    8792 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220329185550-1328" does not appear in C:\Users\jenkins.minikube8\minikube-integration\kubeconfig

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220329185550-1328 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220329185550-1328 --output=json --layout=cluster: exit status 7 (3.9313823s)

-- stdout --
	{"Name":"insufficient-storage-20220329185550-1328","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220329185550-1328","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0329 18:57:00.803324    7184 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220329185550-1328" does not appear in C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	E0329 18:57:00.844189    7184 status.go:557] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\insufficient-storage-20220329185550-1328\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20220329185550-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20220329185550-1328
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20220329185550-1328: (10.36616s)
--- PASS: TestInsufficientStorage (80.30s)
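Note: the `status --output=json --layout=cluster` payloads above report `StatusCode` 507 (`InsufficientStorage`) at both the cluster and node level. A minimal sketch of reading that layout (payload abridged from the run above):

```python
import json

# Abridged `minikube status --output=json --layout=cluster` payload from the
# run above; 507 is the InsufficientStorage code the test looks for.
status = json.loads(
    '{"Name":"insufficient-storage-20220329185550-1328",'
    '"StatusCode":507,"StatusName":"InsufficientStorage",'
    '"Nodes":[{"Name":"insufficient-storage-20220329185550-1328",'
    '"StatusCode":507,"StatusName":"InsufficientStorage"}]}')

# Collect the nodes that are individually flagged as out of disk space.
degraded = [n["Name"] for n in status["Nodes"] if n["StatusCode"] == 507]
print(status["StatusName"], degraded)
```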

TestRunningBinaryUpgrade (298.89s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.1453368198.exe start -p running-upgrade-20220329190230-1328 --memory=2200 --vm-driver=docker
E0329 19:03:22.771795    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.1453368198.exe start -p running-upgrade-20220329190230-1328 --memory=2200 --vm-driver=docker: (2m39.2087284s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-20220329190230-1328 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-20220329190230-1328 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m57.9121423s)
helpers_test.go:176: Cleaning up "running-upgrade-20220329190230-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20220329190230-1328

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20220329190230-1328: (20.6626225s)
--- PASS: TestRunningBinaryUpgrade (298.89s)

TestKubernetesUpgrade (590.38s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220329190043-1328 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220329190043-1328 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: (7m10.0840004s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220329190043-1328
E0329 19:08:05.961732    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
version_upgrade_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220329190043-1328: (28.0262932s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-20220329190043-1328 status --format={{.Host}}
E0329 19:08:22.762738    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-20220329190043-1328 status --format={{.Host}}: exit status 7 (2.345077s)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220329190043-1328 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker
E0329 19:08:56.703363    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 19:09:13.473400    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 19:09:16.207634    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
version_upgrade_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220329190043-1328 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker: (1m30.0926188s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220329190043-1328 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220329190043-1328 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220329190043-1328 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker: exit status 106 (359.8095ms)

-- stdout --
	* [kubernetes-upgrade-20220329190043-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220329190043-1328
	    minikube start -p kubernetes-upgrade-20220329190043-1328 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220329190043-13282 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220329190043-1328 --kubernetes-version=v1.23.6-rc.0
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220329190043-1328 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220329190043-1328 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker: (22.6170598s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20220329190043-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220329190043-1328

=== CONT  TestKubernetesUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220329190043-1328: (16.5822662s)
--- PASS: TestKubernetesUpgrade (590.38s)

TestMissingContainerUpgrade (429.82s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.1.2252995643.exe start -p missing-upgrade-20220329190040-1328 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.1.2252995643.exe start -p missing-upgrade-20220329190040-1328 --memory=2200 --driver=docker: (3m48.1302264s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220329190040-1328
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220329190040-1328: (16.1400152s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220329190040-1328
version_upgrade_test.go:336: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-20220329190040-1328 --memory=2200 --alsologtostderr -v=1 --driver=docker
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-20220329190040-1328 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m28.2783933s)
helpers_test.go:176: Cleaning up "missing-upgrade-20220329190040-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20220329190040-1328
=== CONT  TestMissingContainerUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20220329190040-1328: (35.8785636s)
--- PASS: TestMissingContainerUpgrade (429.82s)

TestStoppedBinaryUpgrade/Setup (1.15s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.15s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.41s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220329185711-1328 --no-kubernetes --kubernetes-version=1.20 --driver=docker
no_kubernetes_test.go:84: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220329185711-1328 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (405.4918ms)
-- stdout --
	* [NoKubernetes-20220329185711-1328] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube8\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube8\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13730
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.41s)

TestNoKubernetes/serial/StartWithK8s (175.99s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220329185711-1328 --driver=docker
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220329185711-1328 --driver=docker: (2m50.9942918s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-20220329185711-1328 status -o json
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:201: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-20220329185711-1328 status -o json: (4.9952055s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (175.99s)

TestStoppedBinaryUpgrade/Upgrade (391.86s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.785369038.exe start -p stopped-upgrade-20220329185711-1328 --memory=2200 --vm-driver=docker
E0329 18:58:22.765741    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 18:59:13.487926    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 18:59:16.201602    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.785369038.exe start -p stopped-upgrade-20220329185711-1328 --memory=2200 --vm-driver=docker: (4m41.1258688s)
version_upgrade_test.go:199: (dbg) Run:  C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.785369038.exe -p stopped-upgrade-20220329185711-1328 stop
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: C:\Users\jenkins.minikube8\AppData\Local\Temp\minikube-v1.9.0.785369038.exe -p stopped-upgrade-20220329185711-1328 stop: (22.3391358s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-20220329185711-1328 --memory=2200 --alsologtostderr -v=1 --driver=docker
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-20220329185711-1328 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m28.3924292s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (391.86s)

TestNoKubernetes/serial/StartWithStopK8s (67.52s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220329185711-1328 --no-kubernetes --driver=docker
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220329185711-1328 --no-kubernetes --driver=docker: (32.5008886s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-20220329185711-1328 status -o json
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:201: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-20220329185711-1328 status -o json: exit status 2 (4.3920769s)
-- stdout --
	{"Name":"NoKubernetes-20220329185711-1328","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:125: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-20220329185711-1328
no_kubernetes_test.go:125: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-20220329185711-1328: (30.6268409s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (67.52s)

TestNoKubernetes/serial/Start (37.95s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220329185711-1328 --no-kubernetes --driver=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220329185711-1328 --no-kubernetes --driver=docker: (37.9533178s)
--- PASS: TestNoKubernetes/serial/Start (37.95s)

TestNoKubernetes/serial/VerifyK8sNotRunning (4.63s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-20220329185711-1328 "sudo systemctl is-active --quiet service kubelet"
=== CONT  TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-20220329185711-1328 "sudo systemctl is-active --quiet service kubelet": exit status 1 (4.6313624s)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (4.63s)

TestStoppedBinaryUpgrade/MinikubeLogs (6.97s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220329185711-1328
version_upgrade_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220329185711-1328: (6.9648202s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (6.97s)

TestPause/serial/Start (113.73s)

=== RUN   TestPause/serial/Start
pause_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220329190403-1328 --memory=2048 --install-addons=false --wait=all --driver=docker
E0329 19:04:13.484725    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 19:04:16.203852    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
=== CONT  TestPause/serial/Start
pause_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20220329190403-1328 --memory=2048 --install-addons=false --wait=all --driver=docker: (1m53.7325583s)
--- PASS: TestPause/serial/Start (113.73s)

TestPause/serial/SecondStartNoReconfiguration (27.74s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220329190403-1328 --alsologtostderr -v=1 --driver=docker
pause_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20220329190403-1328 --alsologtostderr -v=1 --driver=docker: (27.7250009s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.74s)

TestPause/serial/Pause (5.05s)

=== RUN   TestPause/serial/Pause
pause_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20220329190403-1328 --alsologtostderr -v=5
pause_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20220329190403-1328 --alsologtostderr -v=5: (5.049103s)
--- PASS: TestPause/serial/Pause (5.05s)

TestPause/serial/VerifyStatus (4.75s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-20220329190403-1328 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-20220329190403-1328 --output=json --layout=cluster: exit status 2 (4.7467336s)
-- stdout --
	{"Name":"pause-20220329190403-1328","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220329190403-1328","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (4.75s)

TestPause/serial/Unpause (4.99s)

=== RUN   TestPause/serial/Unpause
pause_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-20220329190403-1328 --alsologtostderr -v=5
pause_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-20220329190403-1328 --alsologtostderr -v=5: (4.994963s)
--- PASS: TestPause/serial/Unpause (4.99s)

TestPause/serial/PauseAgain (5.27s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20220329190403-1328 --alsologtostderr -v=5
pause_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20220329190403-1328 --alsologtostderr -v=5: (5.2660038s)
--- PASS: TestPause/serial/PauseAgain (5.27s)

TestPause/serial/DeletePaused (17.18s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-20220329190403-1328 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-20220329190403-1328 --alsologtostderr -v=5: (17.1812851s)
--- PASS: TestPause/serial/DeletePaused (17.18s)

TestPause/serial/VerifyDeletedResources (11.24s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.5812316s)
pause_test.go:169: (dbg) Run:  docker ps -a
pause_test.go:174: (dbg) Run:  docker volume inspect pause-20220329190403-1328
pause_test.go:174: (dbg) Non-zero exit: docker volume inspect pause-20220329190403-1328: exit status 1 (563.9967ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220329190403-1328
** /stderr **
pause_test.go:179: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (11.24s)

TestNetworkPlugins/group/auto/Start (157.72s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20220329190226-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-20220329190226-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: (2m37.7154452s)
--- PASS: TestNetworkPlugins/group/auto/Start (157.72s)

TestNetworkPlugins/group/auto/KubeletFlags (4.09s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-20220329190226-1328 "pgrep -a kubelet"
net_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-20220329190226-1328 "pgrep -a kubelet": (4.0897937s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (4.09s)

TestNetworkPlugins/group/auto/NetCatPod (21.17s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context auto-20220329190226-1328 replace --force -f testdata\netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-6p2nh" [720fdfbf-f0c7-46bd-b2a5-ef56f6d4ca36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:343: "netcat-668db85669-6p2nh" [720fdfbf-f0c7-46bd-b2a5-ef56f6d4ca36] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 20.2139913s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (21.17s)

TestNetworkPlugins/group/auto/DNS (0.57s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220329190226-1328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.57s)

TestNetworkPlugins/group/auto/Localhost (0.58s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:182: (dbg) Run:  kubectl --context auto-20220329190226-1328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.58s)

TestNetworkPlugins/group/auto/HairPin (5.59s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:232: (dbg) Run:  kubectl --context auto-20220329190226-1328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:232: (dbg) Non-zero exit: kubectl --context auto-20220329190226-1328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5870884s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.59s)

TestNetworkPlugins/group/custom-weave/Start (137.06s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-weave-20220329190230-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-weave-20220329190230-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker: (2m17.0596403s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (137.06s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (3.99s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-weave-20220329190230-1328 "pgrep -a kubelet"
net_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-weave-20220329190230-1328 "pgrep -a kubelet": (3.9931324s)
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (3.99s)

TestNetworkPlugins/group/custom-weave/NetCatPod (22.04s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context custom-weave-20220329190230-1328 replace --force -f testdata\netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-tvxpd" [81b1dcad-5e91-43a0-b0f7-69914a917408] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-tvxpd" [81b1dcad-5e91-43a0-b0f7-69914a917408] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 21.0390134s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (22.04s)

TestNetworkPlugins/group/false/Start (173.10s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20220329190230-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker
E0329 19:18:16.192399    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:18:16.207806    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:18:16.222958    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:18:16.253795    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:18:16.299361    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:18:16.392463    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:18:16.564737    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:18:16.892680    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:18:17.536467    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:18:18.822465    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:18:21.389534    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:18:22.771959    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 19:18:26.510760    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:18:36.755889    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:18:57.246984    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:19:13.484017    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 19:19:16.206870    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
E0329 19:19:38.217223    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:20:39.448836    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
net_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe start -p false-20220329190230-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: (2m53.0962783s)
--- PASS: TestNetworkPlugins/group/false/Start (173.10s)

TestNetworkPlugins/group/false/KubeletFlags (5.12s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-20220329190230-1328 "pgrep -a kubelet"
E0329 19:21:00.144544    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
net_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-20220329190230-1328 "pgrep -a kubelet": (5.1156359s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (5.12s)

TestNetworkPlugins/group/false/NetCatPod (23.88s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context false-20220329190230-1328 replace --force -f testdata\netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-t252t" [7cd496a8-eeee-4b16-b2d3-921424530b08] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-t252t" [7cd496a8-eeee-4b16-b2d3-921424530b08] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 23.0331915s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (23.88s)

TestNetworkPlugins/group/false/DNS (0.67s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220329190230-1328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.67s)

TestNetworkPlugins/group/false/Localhost (0.63s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:182: (dbg) Run:  kubectl --context false-20220329190230-1328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.63s)

TestNetworkPlugins/group/false/HairPin (5.67s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:232: (dbg) Run:  kubectl --context false-20220329190230-1328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:232: (dbg) Non-zero exit: kubectl --context false-20220329190230-1328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.6617832s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.67s)

TestNetworkPlugins/group/enable-default-cni/Start (369.63s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20220329190226-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker
E0329 19:22:51.482263    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:23:11.979339    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:23:16.193316    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:23:22.775149    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 19:23:43.996068    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:23:52.946253    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:24:13.482880    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 19:24:16.217392    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.
E0329 19:24:47.064517    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
E0329 19:25:14.871562    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:25:36.719109    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 19:26:03.318369    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:26:03.332443    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:26:03.347763    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:26:03.378219    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:26:03.424813    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:26:03.518444    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:26:03.692308    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:26:04.018685    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:26:04.671443    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:26:05.955705    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:26:08.520183    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:26:13.646620    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:26:23.897314    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:26:44.391655    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:27:25.360030    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:27:30.891155    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-20220329190226-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: (6m9.6258757s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (369.63s)

TestNetworkPlugins/group/bridge/Start (387.80s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20220329190226-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker
E0329 19:28:47.290152    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-20220329190226-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: (6m27.8026713s)
--- PASS: TestNetworkPlugins/group/bridge/Start (387.80s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (4.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-20220329190226-1328 "pgrep -a kubelet"
net_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-20220329190226-1328 "pgrep -a kubelet": (4.2443799s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (4.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (35.00s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20220329190226-1328 replace --force -f testdata\netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-5mxb6" [545f417c-2e64-461e-ba50-5931a48f5cd2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:343: "netcat-668db85669-5mxb6" [545f417c-2e64-461e-ba50-5931a48f5cd2] Running

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 34.0892783s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (35.00s)

TestNetworkPlugins/group/kubenet/Start (667.76s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20220329190226-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-20220329190226-1328 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: (11m7.7642937s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (667.76s)

TestStartStop/group/old-k8s-version/serial/FirstStart (174.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220329193024-1328 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20220329193024-1328 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (2m54.208362s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (174.21s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220329193024-1328 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [c6764042-30ea-472e-92da-871ed2782df0] Pending
helpers_test.go:343: "busybox" [c6764042-30ea-472e-92da-871ed2782df0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:343: "busybox" [c6764042-30ea-472e-92da-871ed2782df0] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.0841816s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220329193024-1328 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (4.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220329193024-1328 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220329193024-1328 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.0155332s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20220329193024-1328 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (4.44s)

TestStartStop/group/old-k8s-version/serial/Stop (16.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-20220329193024-1328 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-20220329193024-1328 --alsologtostderr -v=3: (16.1684631s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.17s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (4.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220329193024-1328 -n old-k8s-version-20220329193024-1328
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220329193024-1328 -n old-k8s-version-20220329193024-1328: exit status 7 (2.2790931s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220329193024-1328 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220329193024-1328 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.3293946s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (4.61s)

TestStartStop/group/old-k8s-version/serial/SecondStart (427.00s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220329193024-1328 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20220329193024-1328 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (7m2.6909655s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220329193024-1328 -n old-k8s-version-20220329193024-1328

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220329193024-1328 -n old-k8s-version-20220329193024-1328: (4.3092036s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (427.00s)

TestNetworkPlugins/group/bridge/KubeletFlags (4.76s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-20220329190226-1328 "pgrep -a kubelet"
net_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-20220329190226-1328 "pgrep -a kubelet": (4.7565909s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (4.76s)

TestNetworkPlugins/group/bridge/NetCatPod (21.89s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20220329190226-1328 replace --force -f testdata\netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-7mp86" [d5b84bf2-0996-4fb0-89c9-733fcdb74c25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-7mp86" [d5b84bf2-0996-4fb0-89c9-733fcdb74c25] Running

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 21.0734826s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (21.89s)

TestStartStop/group/no-preload/serial/FirstStart (430.59s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220329193618-1328 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6-rc.0

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20220329193618-1328 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6-rc.0: (7m10.5855604s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (430.59s)

TestNetworkPlugins/group/kubenet/KubeletFlags (4.02s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-20220329190226-1328 "pgrep -a kubelet"
net_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-20220329190226-1328 "pgrep -a kubelet": (4.0237573s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (4.02s)

TestNetworkPlugins/group/kubenet/NetCatPod (20.29s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kubenet-20220329190226-1328 replace --force -f testdata\netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-p8m6t" [2cc5c7b3-7d2d-471a-9952-84bbbe6d7f5f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:343: "netcat-668db85669-p8m6t" [2cc5c7b3-7d2d-471a-9952-84bbbe6d7f5f] Running

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 19.0970403s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (20.29s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-lwjmb" [7cc7a46e-fdec-4f7b-8596-752ed6e41194] Running
E0329 19:41:03.332739    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0335653s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.04s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-lwjmb" [7cc7a46e-fdec-4f7b-8596-752ed6e41194] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0299568s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220329193024-1328 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.53s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (4.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220329193024-1328 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220329193024-1328 "sudo crictl images -o json": (4.061848s)
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (4.06s)

TestStartStop/group/old-k8s-version/serial/Pause (26.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-20220329193024-1328 --alsologtostderr -v=1

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220329193024-1328 --alsologtostderr -v=1: (4.6999785s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220329193024-1328 -n old-k8s-version-20220329193024-1328

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220329193024-1328 -n old-k8s-version-20220329193024-1328: exit status 2 (4.2992379s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20220329193024-1328 -n old-k8s-version-20220329193024-1328
E0329 19:41:27.073543    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20220329193024-1328 -n old-k8s-version-20220329193024-1328: exit status 2 (4.2364255s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-20220329193024-1328 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-20220329193024-1328 --alsologtostderr -v=1: (4.4876482s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220329193024-1328 -n old-k8s-version-20220329193024-1328

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220329193024-1328 -n old-k8s-version-20220329193024-1328: (4.2712876s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20220329193024-1328 -n old-k8s-version-20220329193024-1328

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20220329193024-1328 -n old-k8s-version-20220329193024-1328: (4.4515319s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (26.45s)

TestStartStop/group/embed-certs/serial/FirstStart (386.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220329194217-1328 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.5

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20220329194217-1328 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.5: (6m26.0150196s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (386.02s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (374.66s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220329194224-1328 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.5
E0329 19:42:26.509986    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\false-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:42:30.889061    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220329194224-1328 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.5: (6m14.6568021s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (374.66s)

TestStartStop/group/no-preload/serial/DeployApp (11.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220329193618-1328 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [7ccd0d50-b340-4cf5-b582-3f01e9577175] Pending
E0329 19:43:29.967314    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
helpers_test.go:343: "busybox" [7ccd0d50-b340-4cf5-b582-3f01e9577175] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [7ccd0d50-b340-4cf5-b582-3f01e9577175] Running

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.0591954s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220329193618-1328 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.30s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (4.51s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220329193618-1328 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0329 19:43:40.207970    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220329193618-1328 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.0984879s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20220329193618-1328 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (4.51s)

TestStartStop/group/no-preload/serial/Stop (16.37s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-20220329193618-1328 --alsologtostderr -v=3
E0329 19:44:00.696426    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-20220329193618-1328 --alsologtostderr -v=3: (16.3712069s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.37s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (4.9s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220329193618-1328 -n no-preload-20220329193618-1328
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220329193618-1328 -n no-preload-20220329193618-1328: exit status 7 (2.4078949s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220329193618-1328 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
E0329 19:44:05.040071    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220329193618-1328 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.488312s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (4.90s)

TestStartStop/group/no-preload/serial/SecondStart (633.58s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220329193618-1328 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6-rc.0
E0329 19:44:13.484615    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 19:44:16.221825    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20220329193618-1328 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6-rc.0: (10m29.4187197s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220329193618-1328 -n no-preload-20220329193618-1328
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220329193618-1328 -n no-preload-20220329193618-1328: (4.1596204s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (633.58s)

TestStartStop/group/newest-cni/serial/FirstStart (108.96s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220329194656-1328 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6-rc.0
E0329 19:47:30.889998    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\custom-weave-20220329190230-1328\client.crt: The system cannot find the path specified.
E0329 19:47:49.536394    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:48:16.211253    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\auto-20220329190226-1328\client.crt: The system cannot find the path specified.
E0329 19:48:19.617033    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.
E0329 19:48:22.781081    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220329181027-1328\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20220329194656-1328 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6-rc.0: (1m48.959076s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (108.96s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (12.19s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220329194224-1328 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [90fe6ebb-79f2-44fe-992d-884475d8d0ab] Pending
helpers_test.go:343: "busybox" [90fe6ebb-79f2-44fe-992d-884475d8d0ab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:343: "busybox" [90fe6ebb-79f2-44fe-992d-884475d8d0ab] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 11.0456503s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220329194224-1328 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (12.19s)

TestStartStop/group/embed-certs/serial/DeployApp (12.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220329194217-1328 create -f testdata\busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [6591a56b-d5f7-4ba5-ba60-d6849b827110] Pending

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:343: "busybox" [6591a56b-d5f7-4ba5-ba60-d6849b827110] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:343: "busybox" [6591a56b-d5f7-4ba5-ba60-d6849b827110] Running

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.0512558s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220329194217-1328 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.25s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.85s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220329194656-1328 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0329 19:48:47.441388    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220329194656-1328 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.8500172s)
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.85s)

TestStartStop/group/newest-cni/serial/Stop (17.17s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-20220329194656-1328 --alsologtostderr -v=3

=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-20220329194656-1328 --alsologtostderr -v=3: (17.1726847s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (17.17s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (4.77s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220329194224-1328 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220329194224-1328 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.2972866s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20220329194224-1328 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (4.77s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.81s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220329194217-1328 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220329194217-1328 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.3904616s)
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20220329194217-1328 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.81s)

TestStartStop/group/default-k8s-different-port/serial/Stop (17.05s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220329194224-1328 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220329194224-1328 --alsologtostderr -v=3: (17.0458239s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (17.05s)

TestStartStop/group/embed-certs/serial/Stop (16.24s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-20220329194217-1328 --alsologtostderr -v=3
E0329 19:49:05.043028    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\enable-default-cni-20220329190226-1328\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-20220329194217-1328 --alsologtostderr -v=3: (16.2380002s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (4.7s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220329194656-1328 -n newest-cni-20220329194656-1328
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220329194656-1328 -n newest-cni-20220329194656-1328: exit status 7 (2.4263267s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220329194656-1328 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220329194656-1328 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.2781451s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (4.70s)

TestStartStop/group/newest-cni/serial/SecondStart (55.72s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220329194656-1328 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6-rc.0
E0329 19:49:13.490272    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20220329194656-1328 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6-rc.0: (50.0578826s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220329194656-1328 -n newest-cni-20220329194656-1328
E0329 19:50:05.583612    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220329194656-1328 -n newest-cni-20220329194656-1328: (5.6652864s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (55.72s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (4.7s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220329194224-1328 -n default-k8s-different-port-20220329194224-1328
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220329194224-1328 -n default-k8s-different-port-20220329194224-1328: exit status 7 (2.3213003s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220329194224-1328 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
E0329 19:49:16.222161    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\functional-20220329172957-1328\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220329194224-1328 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.3802259s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (4.70s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (4.77s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220329194217-1328 -n embed-certs-20220329194217-1328

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220329194217-1328 -n embed-certs-20220329194217-1328: exit status 7 (2.3192398s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220329194217-1328 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220329194217-1328 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.4546084s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (4.77s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (627.33s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220329194224-1328 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.5

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220329194224-1328 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.5: (10m23.0267617s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220329194224-1328 -n default-k8s-different-port-20220329194224-1328

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220329194224-1328 -n default-k8s-different-port-20220329194224-1328: (4.3034707s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (627.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (620.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220329194217-1328 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.5

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20220329194217-1328 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.5: (10m16.251622s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220329194217-1328 -n embed-certs-20220329194217-1328

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220329194217-1328 -n embed-certs-20220329194217-1328: (4.321587s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (620.57s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (4.71s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-20220329194656-1328 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-20220329194656-1328 "sudo crictl images -o json": (4.7060892s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (4.71s)

TestStartStop/group/newest-cni/serial/Pause (30.8s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-20220329194656-1328 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-20220329194656-1328 --alsologtostderr -v=1: (5.4166151s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220329194656-1328 -n newest-cni-20220329194656-1328
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220329194656-1328 -n newest-cni-20220329194656-1328: exit status 2 (4.4897426s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220329194656-1328 -n newest-cni-20220329194656-1328
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220329194656-1328 -n newest-cni-20220329194656-1328: exit status 2 (4.4374167s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-20220329194656-1328 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-20220329194656-1328 --alsologtostderr -v=1: (4.9433052s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220329194656-1328 -n newest-cni-20220329194656-1328
E0329 19:50:33.384236    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220329194656-1328 -n newest-cni-20220329194656-1328: (5.5077065s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220329194656-1328 -n newest-cni-20220329194656-1328
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220329194656-1328 -n newest-cni-20220329194656-1328: (6.0042276s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (30.80s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-sbz7w" [02bb77d0-272d-4706-92d8-e270b480b3fc] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0393766s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.68s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-sbz7w" [02bb77d0-272d-4706-92d8-e270b480b3fc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0996555s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20220329193618-1328 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.68s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (4.4s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-20220329193618-1328 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-20220329193618-1328 "sudo crictl images -o json": (4.3957484s)
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (4.40s)

TestStartStop/group/no-preload/serial/Pause (28.93s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-20220329193618-1328 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-20220329193618-1328 --alsologtostderr -v=1: (4.963137s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220329193618-1328 -n no-preload-20220329193618-1328
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220329193618-1328 -n no-preload-20220329193618-1328: exit status 2 (4.5426921s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220329193618-1328 -n no-preload-20220329193618-1328
E0329 19:55:05.588634    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220329193618-1328 -n no-preload-20220329193618-1328: exit status 2 (4.502936s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-20220329193618-1328 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-20220329193618-1328 --alsologtostderr -v=1: (4.7182558s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220329193618-1328 -n no-preload-20220329193618-1328
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220329193618-1328 -n no-preload-20220329193618-1328: (5.1599244s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220329193618-1328 -n no-preload-20220329193618-1328
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220329193618-1328 -n no-preload-20220329193618-1328: (5.0401892s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (28.93s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-87tml" [d5e7bcaa-1c46-4e1d-9c57-78bdeef3867d] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0329 19:59:42.807457    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\old-k8s-version-20220329193024-1328\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0393559s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.04s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.04s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-nk9z7" [cbfba2ce-16c6-42e3-a974-be2dddc8977a] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0351256s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.5s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-87tml" [d5e7bcaa-1c46-4e1d-9c57-78bdeef3867d] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0255067s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20220329194217-1328 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.50s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.51s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-nk9z7" [cbfba2ce-16c6-42e3-a974-be2dddc8977a] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0329 19:59:51.298730    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\no-preload-20220329193618-1328\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0208964s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220329194224-1328 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.51s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (3.97s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-20220329194217-1328 "sudo crictl images -o json"

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-20220329194217-1328 "sudo crictl images -o json": (3.9647673s)
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (3.97s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (4.06s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220329194224-1328 "sudo crictl images -o json"

=== CONT  TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220329194224-1328 "sudo crictl images -o json": (4.0611899s)
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (4.06s)

TestStartStop/group/embed-certs/serial/Pause (26.69s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-20220329194217-1328 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-20220329194217-1328 --alsologtostderr -v=1: (4.4933075s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220329194217-1328 -n embed-certs-20220329194217-1328

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220329194217-1328 -n embed-certs-20220329194217-1328: exit status 2 (4.1885564s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220329194217-1328 -n embed-certs-20220329194217-1328

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220329194217-1328 -n embed-certs-20220329194217-1328: exit status 2 (4.1844071s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-20220329194217-1328 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-20220329194217-1328 --alsologtostderr -v=1: (4.655877s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220329194217-1328 -n embed-certs-20220329194217-1328

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220329194217-1328 -n embed-certs-20220329194217-1328: (4.5610429s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220329194217-1328 -n embed-certs-20220329194217-1328

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220329194217-1328 -n embed-certs-20220329194217-1328: (4.6043291s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (26.69s)

TestStartStop/group/default-k8s-different-port/serial/Pause (27.39s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220329194224-1328 --alsologtostderr -v=1

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220329194224-1328 --alsologtostderr -v=1: (4.565389s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220329194224-1328 -n default-k8s-different-port-20220329194224-1328
E0329 20:00:05.591640    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\bridge-20220329190226-1328\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220329194224-1328 -n default-k8s-different-port-20220329194224-1328: exit status 2 (4.1610761s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220329194224-1328 -n default-k8s-different-port-20220329194224-1328

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220329194224-1328 -n default-k8s-different-port-20220329194224-1328: exit status 2 (4.1254852s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20220329194224-1328 --alsologtostderr -v=1

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20220329194224-1328 --alsologtostderr -v=1: (5.0248768s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220329194224-1328 -n default-k8s-different-port-20220329194224-1328

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220329194224-1328 -n default-k8s-different-port-20220329194224-1328: (5.2219192s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220329194224-1328 -n default-k8s-different-port-20220329194224-1328

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220329194224-1328 -n default-k8s-different-port-20220329194224-1328: (4.2912303s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (27.39s)

Test skip (25/272)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.23.5/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.5/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.5/cached-images (0.00s)

TestDownloadOnly/v1.23.5/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.5/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.5/binaries (0.00s)

TestDownloadOnly/v1.23.6-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.23.6-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (26.13s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 27.3412ms
=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-wqxmc" [ccc35633-8f42-4ebf-9493-ef2ddc3c4572] Running
=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0355149s
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-proxy-v6vc9" [06d018b4-4e85-4914-a286-d5d624734da7] Running
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.1335634s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20220329171625-1328 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20220329171625-1328 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
=== CONT  TestAddons/parallel/Registry
addons_test.go:296: (dbg) Done: kubectl --context addons-20220329171625-1328 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (15.5943982s)
addons_test.go:306: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (26.13s)

                                                
                                    
TestAddons/parallel/Ingress (49.9s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20220329171625-1328 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220329171625-1328 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:183: (dbg) Done: kubectl --context addons-20220329171625-1328 replace --force -f testdata\nginx-ingress-v1.yaml: (6.07435s)
addons_test.go:196: (dbg) Run:  kubectl --context addons-20220329171625-1328 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:196: (dbg) Done: kubectl --context addons-20220329171625-1328 replace --force -f testdata\nginx-pod-svc.yaml: (1.9599781s)
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [a43f19ca-45aa-4c69-bed8-fc2bf34a36b5] Pending
helpers_test.go:343: "nginx" [a43f19ca-45aa-4c69-bed8-fc2bf34a36b5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [a43f19ca-45aa-4c69-bed8-fc2bf34a36b5] Running
=== CONT  TestAddons/parallel/Ingress
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 37.1747837s
addons_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220329171625-1328 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
=== CONT  TestAddons/parallel/Ingress
addons_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220329171625-1328 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (4.0188635s)
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (49.90s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:449: Skipping Olm addon till images are fixed
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:187: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220329172957-1328 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:913: output didn't produce a URL
functional_test.go:907: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220329172957-1328 --alsologtostderr -v=1] ...
helpers_test.go:489: unable to find parent, assuming dead: process does not exist
E0329 17:39:13.448256    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 17:44:13.458252    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 17:45:36.632212    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 17:49:13.459382    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 17:54:13.462371    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 17:59:13.451480    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 18:02:16.646521    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 18:04:13.450899    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
E0329 18:09:13.455774    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:58: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (26.72s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) Run:  kubectl --context functional-20220329172957-1328 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1575: (dbg) Run:  kubectl --context functional-20220329172957-1328 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1580: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:343: "hello-node-connect-74cf8bc446-9gn2c" [03986580-e907-42a3-8745-215d29080fd3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:343: "hello-node-connect-74cf8bc446-9gn2c" [03986580-e907-42a3-8745-215d29080fd3] Running
E0329 17:34:41.267056    1328 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube8\minikube-integration\.minikube\profiles\addons-20220329171625-1328\client.crt: The system cannot find the path specified.
functional_test.go:1580: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 26.0532834s
functional_test.go:1586: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (26.72s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:547: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:194: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (44.74s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20220329181027-1328 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20220329181027-1328 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (7.6349825s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220329181027-1328 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-20220329181027-1328 replace --force -f testdata\nginx-ingress-v1beta1.yaml: (1.422983s)
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20220329181027-1328 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:196: (dbg) Done: kubectl --context ingress-addon-legacy-20220329181027-1328 replace --force -f testdata\nginx-pod-svc.yaml: (1.3029305s)
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [abedc88d-ab0f-4b7c-bccb-24e1bc9c1f16] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [abedc88d-ab0f-4b7c-bccb-24e1bc9c1f16] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 30.2781115s
addons_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220329181027-1328 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220329181027-1328 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (3.9066098s)
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (44.74s)

                                                
                                    
TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:77: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestNetworkPlugins/group/flannel (4.45s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20220329190226-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-20220329190226-1328
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-20220329190226-1328: (4.4492822s)
--- SKIP: TestNetworkPlugins/group/flannel (4.45s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (6.25s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20220329194218-1328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220329194218-1328
=== CONT  TestStartStop/group/disable-driver-mounts
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220329194218-1328: (6.2491041s)
--- SKIP: TestStartStop/group/disable-driver-mounts (6.25s)

                                                
                                    